I created a 7-drive RAID5 array with mdadm and put LVM2 on top of the array in case I want to expand it in the future. I formatted it as XFS. I basically followed this tutorial line for line:
Easy to expand Linux software RAID 5 with XFS. Best practices?
When I do
hdparm -tT /dev/md0
I see read speeds of 400MB/s+
However, over the network (gigabit crossover) I only get 55-60MB/s for both read and write. It's a pretty new machine (built in the last 2 years).
What could be some sources of the bottleneck, and how can I fix them?
Thanks.
To start, your gigabit network is 1000Mbps, which works out to 125MBps (1000 ÷ 8), and after TCP/IP and protocol overhead it's only ~100MBps. You're getting about half that, which isn't great, but you'll never hit 400MBps or even close.
What protocol are you using over the network? CIFS, AFS, NFS? If it's NFS, UDP or TCP? Jumbo frames, data frame size, async/sync? What method of testing? There are a lot of factors that could be playing into your performance.
Using NFSv3, UDP, 8k data frames, 9k MTU, and async+aio I can get 102MBps out of my gigabit network. I consider that pretty close to the limit (note: I'm running FreeBSD on both ends, not Linux, and ZFS rather than XFS or ext).
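For reference, here's a rough sketch of what those options might look like on a Linux client; the NIC name, server name, and export path below are placeholders, and since my numbers are from FreeBSD the exact option names on your system may differ:

# enable jumbo frames on the NIC (the other end of the crossover link must match)
ip link set eth0 mtu 9000
# mount with NFSv3 over UDP, 8k read/write sizes, async writes
mount -t nfs -o nfsvers=3,proto=udp,rsize=8192,wsize=8192,async server:/export /mnt/nfs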
Measure the disk performance locally to determine if the 55-60MB/sec you're seeing is the disk and/or the network.
You can test local speeds with 'time' and 'dd':
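For example, something like this (the mount point /mnt/raid is a placeholder; adjust the path, block size, and count to your setup):

# /mnt/raid is a placeholder mount point; 40960 x 1MB = ~40GB
time dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=40960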
That should write a 40GB file and tell you how long it took. It's important to write out a file larger than the available RAM, usually about twice the size if possible, to avoid any caching effects. Use whatever combination of block size and count makes sense for your stripe size and filesystem block size.
Then, read the file back in to test read speed:
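Again assuming the placeholder path from above:

# read the file back, discarding the data
time dd if=/mnt/raid/testfile of=/dev/null bs=1M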
If local performance is faster than the 55-60MB/sec you're seeing over CIFS from a remote host, then look into the network (on both hosts; you can use iperf for this). If the local performance is what's capping out at 55-60MB/sec, you're going to need to provide more details regarding the hardware specs and configuration.
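A quick iperf check might look like this (the IP is a placeholder, and iperf needs to be installed on both machines):

# on the server
iperf -s
# on the client, pointing at the server's address
iperf -c 192.168.1.10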