I'm using Openfiler 2.3 on an HP ML370 G5, Smart Array P400, SAS disks combined using RAID 1+0.
I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I was able to mount the share from another host. Both hosts are connected by a dedicated gigabit link.
Simple benchmark using dd:
$ dd if=/dev/zero of=outfile bs=1000 count=2000000
2000000+0 records in
2000000+0 records out
2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s
I see it can achieve a moderate transfer speed (58.0 MB/s).
But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) with a total size of ~300 MB, the cp process takes about 10 minutes.
Is NFS not suitable for small-file transfers like this? Or are there parameters that should be tuned?
I don't have a lot of NFS experience, but my experience with other network file sharing protocols says that performance suffers in the "many small files" scenario nearly universally. You're incurring round-trip latency, and over a large group of files that latency adds up.
There are many reasons why transferring many small files will always be slower than transferring a single large file. For a read, the files are more likely to be scattered around the disk, requiring seeks all over the place to get them. As Evan mentioned, there's also metadata involved in the case of NFS (or any other file system for that matter!) which also complicates things.
You can try increasing the rsize and wsize parameters on the NFS mount and see if that helps performance a bit. Also check out this question on tuning NFS for minimum latency, as it has a lot of advice that applies to many-small-file transfers.

Have you tried a different filesystem, like XFS? It solved all my problems when doing extreme amounts of small iSCSI block transfers. No idea why.
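For example, remounting with larger transfer sizes looks like this (a sketch only; the server name, export path, and 32 kB sizes are assumptions, and your kernel may cap the values it accepts):

```
# Hypothetical server and path -- remount with larger read/write sizes
umount /mnt/share
mount -t nfs -o rsize=32768,wsize=32768 filer:/mnt/share /mnt/share
```

Check the effective values afterwards in /proc/mounts, since the server and client negotiate the final sizes.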
Also, iSCSI/NFS is usually configured for pretty large frames (jumbo frames etc.), which can hurt when you are copying tiny files one at a time. Maybe tar'ing the tree and then transferring the archive would help.
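As a rough illustration (paths and file counts are made up), bundling the tree into one archive turns thousands of per-file round trips into a single sequential stream; in practice you would extract the archive on the remote side:

```shell
# Create a throwaway tree of small files, then bundle and unbundle it.
src=$(mktemp -d); dst=$(mktemp -d)
for i in $(seq 1 200); do echo "payload $i" > "$src/file$i.php"; done

tar -C "$src" -cf /tmp/bundle.tar .   # one archive instead of 200 files
tar -C "$dst" -xf /tmp/bundle.tar     # extract at the destination

diff -r "$src" "$dst" && echo "trees match"
```

The speedup comes from the network seeing one large write rather than one open/write/close round trip per file.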
Check that you're using a TCP connection (mount -t nfs -o tcp host:/mount /target). Performance on modern systems won't be affected much, but small IOs may improve significantly if your network is loaded.
You should also try another filesystem; ext3 is basically the slowest of all. It's solid and well known, but quite unsuitable for a file server. XFS is way better, and reiserfs is also much better at small IOs.
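If you have a spare volume to experiment with, creating and mounting an XFS filesystem is straightforward (the device name below is hypothetical; on Smart Array controllers it will be something under /dev/cciss/, and mkfs destroys any existing data on the target):

```
# WARNING: hypothetical device -- mkfs.xfs erases everything on it
mkfs.xfs /dev/cciss/c0d1p1
mkdir -p /mnt/xfs-share
mount -t xfs /dev/cciss/c0d1p1 /mnt/xfs-share
```

You can then export the new mount point over NFS and rerun the same small-file copy to compare.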
If you want to transfer a large directory tree of small files over NFS, and you can login to the server, the best way to do it is to make a tar file that is automatically extracted on the client, as follows:
tar -cf - mydirectory | ssh user@host tar -xf - -C destdir
That way only a single "file" gets transferred across the network and you immediately have all your files on the host.
The problem is that your share is exported with the sync option (the default). Use async to speed up writes considerably. See nfs.sourceforge.net/nfs-howto/ar01s05.html

A solution similar to Chris's answer would be to rsync your files over to the clients periodically. If you want to make two-way changes, you could also use unison.