I noticed that write performance on my RAID-6 array is very low, but when I run tests with hdparm the speed is reasonable:
dd if=/dev/zero of=/store/01/test.tmp bs=1M count=10000
This gives 50Mb/s or even less.
hdparm, on the other hand, gives 450MB/s:
hdparm --direct -t /dev/vg_store_01/logical_vg_store_01
Why are file writes so much slower than the hdparm test? Is there some kernel limit that should be tuned?
I have an Areca 1680 adapter with 16x1TB SAS disks, running Scientific Linux 6.0.
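For comparison, a variant of the same dd test that keeps the page cache out of the timing would look like this (the path and size here are just placeholders; conv=fdatasync makes dd flush the data to disk before it reports a rate):

```shell
# Cache-honest write test sketch: fdatasync forces the written data to
# disk before dd prints its throughput figure.
dd if=/dev/zero of=/tmp/ddtest.tmp bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest.tmp
```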
EDIT
My bad, sorry: all units are in MB/s.
More on hardware:
Two Areca controllers in a dual quad-core machine with 16GB RAM.
The firmware for the SAS backplane and the Areca controllers is recent.
The disks are Seagate 7,200rpm, 16x1TB, in two RAID boxes.
Each set of 8 disks is a RAID-6, so four volumes in total, with LBA=64.
Two volumes are grouped into a striped LVM and formatted ext4.
The stripe size is 128.
When I format the volume I can see with iotop that it writes at 400MB/s.
iostat also shows that both LVM member drives are writing at 450MB/s.
FINALLY WRITING AT 1600MB/s
One of the RAID sets was degrading performance due to a bad disk. Strangely, in JBOD mode that disk gives 100MB/s with hdparm, just like the others. After heavy I/O it started reporting Write Error in the log files (by now it has 10 of them), yet the RAID still did not fail or mark itself degraded.
Well, after the replacement my configuration is the following:
- 2x ARC-1680 controllers
- RAID-0 with 16x1TB SAS disks, stripe 128, LBA64
- RAID-0 with 16x1TB SAS disks, stripe 128, LBA64
- volume group with 128K stripe size
- formatted XFS
Direct
hdparm --direct -t /dev/vg_store01/vg_logical_store01
/dev/vg_store01/vg_logical_store01: Timing O_DIRECT disk reads: 4910 MB in 3.00 seconds = 1636.13 MB/sec
No Direct
hdparm -t /dev/vg_store01/vg_logical_store01
/dev/vg_store01/vg_logical_store01: Timing buffered disk reads: 1648 MB in 3.00 seconds = 548.94 MB/sec
**dd test DIRECT**
dd if=/dev/zero of=/store/01/test.tmp bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 8.87402 s, 1.2 GB/s
**WITHOUT DIRECT**
dd if=/dev/zero of=/store/01/test.tmp bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 19.1996 s, 546 MB/s
Check if your FS is aligned with the RAID geometry. I'm getting 320MB/s on a RAID-6 array with 8x2TB SATA drives on XFS, and I think it is limited by the 3Gb/s SAS channel rather than by RAID-6 performance. You can get some ideas on alignment from this thread.
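As a sketch of what "aligned" means here, assuming the geometry from the question (8-disk RAID-6, hence 6 data disks, 128KiB chunk per disk, 4KiB filesystem blocks — the 6-data-disk figure is my assumption), the ext4 alignment parameters would be computed like this:

```shell
# Assumed geometry (not confirmed by the OP): 8-disk RAID-6 => 6 data
# disks, 128 KiB chunk per disk, 4 KiB filesystem block size.
CHUNK_KB=128
DATA_DISKS=6
BLOCK_KB=4

STRIDE=$(( CHUNK_KB / BLOCK_KB ))        # fs blocks per chunk
STRIPE_WIDTH=$(( STRIDE * DATA_DISKS ))  # fs blocks per full stripe

echo "stride=$STRIDE stripe-width=$STRIPE_WIDTH"
# These would then be passed to mkfs, e.g.:
#   mkfs.ext4 -E stride=32,stripe-width=192 /dev/vg/lv
```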
Unfortunately you're comparing apples with oranges.
450Mb/s = 56MB/s, which is about on par with what you're seeing in real life. They're both giving you the same reading, but one is in bits and one is in bytes; you need to divide 450 by 8 to get the same measure for both.
(In your question you've got the capitalisation the other way around; I can only hope/assume that this is a typo, because if you reverse the capitalisation you get an almost perfect match.)
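The conversion is just a divide-by-eight (8 bits per byte):

```shell
# Mb/s (megabits per second) vs MB/s (megabytes per second):
echo "450 Mb/s = $(( 450 / 8 )) MB/s"
```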
Check if you can enable the write cache on the RAID controller.
It's best if you have a battery backup unit on your controller; otherwise you may lose data during a power failure.
Doesn't the hdparm test basically perform buffered read tests? You can't compare buffered read speeds to actual write speeds and expect them to be equal on a RAID-6 device.
While I would expect better than 50MB/s writes on a RAID-6 of that size with quality drives (1TB SAS or 1TB SATA?), I wouldn't expect 450MB/s write speeds.
hdparm does not test write performance; it is read-only. Moreover, it tests raw block I/O read performance, while the way you invoke dd makes it test write and filesystem performance as well (and RAID-5/6 writes are noticeably slower than reads by design). If your FS is ext3, for example, you can easily get poor performance by formatting it improperly (not taking the full stripe size of your RAID into consideration). Also, quite a lot of people tend to use rather small stripe sizes, which leads to suboptimal disk I/O. What was your stripe size choice when creating this RAID?
Another question is how dd's numbers differ as you vary the bs parameter. Have you tried using the full stripe write size for it?
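A quick sketch of what a full-stripe bs would be, assuming the 128KiB chunk size from the question and 6 data disks per 8-disk RAID-6 (the data-disk count is my assumption):

```shell
CHUNK_KB=128       # per-disk stripe (chunk) size, from the OP
DATA_DISKS=6       # 8-disk RAID-6 leaves 6 data disks (assumption)
FULL_STRIPE_KB=$(( CHUNK_KB * DATA_DISKS ))
echo "full stripe = ${FULL_STRIPE_KB}K"
# A full-stripe dd write would then be, e.g.:
#   dd if=/dev/zero of=/store/01/test.tmp bs=768K count=10000 oflag=direct
```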