I have just installed a server that sends HTTP video streaming files to a Web server over NFS, and I want to make sure the transfer speed is optimized. But I don't know what sort of read/write rates are typical, so I don't know whether I have already achieved close to the maximum. I understand that the wsize and rsize mount parameters are important, but I don't know what they default to or whether it's worth changing them.
As per the sourceforge article on NFS, I tested write speed using:
time dd if=/dev/zero of=/mnt/data/video/testfile bs=16k count=16384
And I get a write rate of 48 MB/s.
I tested this a number of times (unmounting / mounting to clear the cache) and this speed was fairly consistent.
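For anyone repeating this test, the cache can also be cleared without a full unmount: on 2.6.16+ kernels the client page cache can be dropped directly. A sketch using the same mount point as above:

```shell
# Flush dirty pages first, then drop the client-side page cache
# so subsequent reads go back over the wire (requires root)
sync
echo 3 > /proc/sys/vm/drop_caches

# Or the unmount/mount cycle mentioned above
umount /mnt/data && mount /mnt/data
```

Note that this only clears the client's cache; the server may still serve the file from its own page cache, which is usually what you want when measuring network throughput rather than disk speed.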
I tested read speed:
time dd if=/mnt/data/video/testfile of=/dev/null bs=16k
And I get a read rate of 117 MB/s.
The Ethernet switch and all cables are good for 1 Gb/s, the NICs on both machines are set to use jumbo frames (MTU=9000), and in /etc/exports I set the async option - speed is more important to me than perfect data integrity. Both machines are fairly standard HP ProLiants with 7.2K SATA drives (SATA 3 Gb/s on one, 6 Gb/s on the other). Both machines are running Linux 2.6.18; the machine sending the files is running CentOS 5.5 and the machine receiving them is running RHEL 5.4 (Tikanga).
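For reference, a sketch of what the async export line might look like in /etc/exports - the export path and client subnet here are hypothetical:

```shell
# /etc/exports - async replies to writes before they hit disk,
# trading crash safety for speed, as noted above
/data/video  192.168.1.0/24(rw,async,no_subtree_check)
```

After editing the file, run `exportfs -ra` to re-export without restarting the NFS server.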
I'm hoping that someone who has tested a number of different systems can say whether the above figures are typical for NFS data transfer or if there is plenty of room to increase them.
Adam, in my opinion you ARE in the right ballpark.
Blocksize is hugely important as the sourceforge article implies.
It's unlikely that you can hit your network wire speed of 110-120 MB/s (that's 1 Gb/s - gigabits, not gigabytes).
For the audience, here's the article which is great: http://nfs.sourceforge.net/nfs-howto/ar01s05.html
There is of course some overhead in transferring each block and waiting for the response. We tested on 10 Gb Ethernet and a single stream wasn't any faster, though in theory you could run more NFS transfers in parallel over that fatter pipe.
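The 110-120 MB/s ceiling follows from simple arithmetic: 1 Gb/s is 125 MB/s of raw bits, and Ethernet/IP/TCP/RPC headers eat several percent of that. A quick check:

```shell
# 1 gigabit per second expressed in megabytes per second (raw, no overhead)
echo $((1000000000 / 8 / 1000000))    # 125

# Knock off ~6% for protocol overhead to get a realistic ceiling
echo $((125 * 94 / 100))              # 117
```

So a read rate of 117 MB/s is essentially the wire saturated.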
For comparison, take a 1-bay NAS appliance from Synology using CIFS (they don't publish NFS figures):
http://www.synology.com/products/performance.php?lang=enu
By that measure your file server is a little faster than a 2010 1-disk NAS but slower than a 2011 model.
The maximum for 1 Gb Ethernet should be in the region of 110-120 MB/s. Make sure you are using TCP transport with NFSv3, not UDP, and not NFSv2. NFSv4 would be preferable still.
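You can confirm the transport and NFS version from the client's mount options. Here's a check against a sample /proc/mounts entry - the server name and paths are made up; on a real system you would grep /proc/mounts (or run `nfsstat -m`) instead of using a canned string:

```shell
# A sample NFSv3-over-TCP entry as it would appear in /proc/mounts
line='server:/data /mnt/data nfs rw,vers=3,proto=tcp,rsize=32768,wsize=32768 0 0'

# These checks succeed for TCP + v3, and would fail for proto=udp or vers=2
echo "$line" | grep -o 'proto=tcp'    # prints proto=tcp
echo "$line" | grep -o 'vers=3'       # prints vers=3
```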