I have a system with dual Intel E5-2620 v3 CPUs @ 2.40GHz, 64 GB of RAM, an LSI 3008 HBA, and 8x 1.6 TB Intel S3510 SSDs. I've been benchmarking it in different configurations using fio and I've gotten some interesting results. Benchmarking each raw disk individually, I see ~450 MB/s of random writes per device. In a RAID 0 (stripe without parity) I see ~3500 MB/s of random writes - basically 450 MB/s * 8 disks.
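For the per-disk numbers I pointed fio at each block device directly rather than at a filesystem; roughly like this (the device path is just an example, and the full job file I used for the array tests is at the bottom of the post):

fio --name=rawdisk --filename=/dev/sdb --rw=randwrite --bs=512k --direct=1 --numjobs=600 --group_reporting --runtime=120 --ramp_time=5 --size=10G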
However, when I tried RAID 5, performance completely tanked. I realize RAID 5 should be significantly slower than RAID 0, but with the same test RAID 5 managed only about 50 MB/s. I've used RAID 5 before, and while its performance has never been great, I've never seen a 99% penalty. The fio test I'm running has 600 threads writing random data in 512k blocks. The filesystem on the device is XFS for all of these tests.
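For context, the arrays were built with mdadm, along these lines (the device names and chunk size here are placeholders, not necessarily the exact values I used):

mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=512 /dev/sd[b-i]
mkfs.xfs /dev/md0
mount /dev/md0 /raid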
Setting /sys/block/md0/md/stripe_cache_size to 32768 (the maximum possible value) increased overall throughput to ~130 MB/s, which leads me to suspect the problem is the lack of a write-back cache like those found in hardware RAID controllers. Any idea what's causing this, or what else I can do to improve mdadm RAID 5 performance?
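For anyone wanting to reproduce the stripe cache tweak, it's just a sysfs write. The value is in cache entries of one page per member device, so 32768 entries x 4 KiB pages x 8 disks works out to roughly 1 GiB of RAM for the cache, and the setting does not survive a reboot:

echo 32768 > /sys/block/md0/md/stripe_cache_size
cat /sys/block/md0/md/stripe_cache_size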
Interestingly, I also tried a 16-disk RAID 10 (the same eight disks plus eight more on a second LSI HBA) and got ~2400 MB/s - roughly a third lower than RAID 0. Given how RAID 10 works (every write goes to a mirror pair, so 16 disks should still deliver about 8 disks' worth of write throughput), I would have expected performance nearly identical to RAID 0.
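The RAID 10 was created the same way, just spread across both HBAs; something like this (again, device names are placeholders):

mdadm --create /dev/md0 --level=10 --raid-devices=16 /dev/sd[b-q]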
For anyone interested, here's the fio config file:
[global]
# 600 concurrent jobs doing direct (unbuffered) 512k random writes
rw=randwrite
direct=1
numjobs=600
group_reporting
bs=512k
# 120 s measured run after a 5 s ramp, 10G of data per job
runtime=120
ramp_time=5
size=10G

[raid]
new_group
# directory on the filesystem mounted from the array under test
directory=/raid/
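The job file is run with the array under test mounted at /raid, e.g. (the file name is arbitrary):

fio raid-test.fio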