We are doing some tests on a new database server with 4 x 240 GB SSD disks. From what I have read, RAID 10 should be faster than RAID 5 while providing the same "one disk can fail" redundancy.
However, when testing with bonnie++, RAID 10 doesn't seem to be any quicker than RAID 5. Any idea why?
- 4 x 240GB SSD disks, Software RAID, Ubuntu 14.04
- Intel® Xeon® E5-1650 v2 hexa-core (Ivy Bridge-E) with Hyper-Threading, 128 GB ECC RAM
- http://www.hetzner.de/en/hosting/produkte_rootserver/px120ssd
RAID5 (all 4 disks):
# cat /proc/mdstat
md2 : active raid5 sdd3[4] sdc3[2] sda3[0] sdb3[1]
688730112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 647G 1.6G 613G 1% /
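The exact mdadm invocation isn't shown above; judging from the mdstat output (four partitions, 512K chunk), the array would have been created with something along these lines (a reconstruction, not the actual command used):
# mdadm --create /dev/md2 --level=5 --chunk=512 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3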
# bonnie++ -d /tmp -u root
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
db1a 252G 1113 99 474860 26 327393 16 5943 99 1192788 23 +++++ +++
Sequential write: 0.474 GB/s
Sequential rewrite: 0.327 GB/s
Sequential read: 1.192 GB/s
RAID10:
# cat /proc/mdstat
md2 : active raid10 sdd3[3] sdc3[2] sdb3[1] sda3[0]
459153408 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 431G 1.6G 408G 1% /
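Again, the creation command is a guess reconstructed from the mdstat output (near layout with two copies, 512K chunk):
# mdadm --create /dev/md2 --level=10 --layout=n2 --chunk=512 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3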
# bonnie++ -d /tmp -u root
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
db1a 252G 1221 99 492972 27 323392 15 5688 100 1178194 23 +++++ +++
Sequential write: 0.492 GB/s
Sequential rewrite: 0.323 GB/s
Sequential read: 1.178 GB/s
Update
I ran the RAID 10 test with iozone to see whether a multithreaded benchmark would perform any better, on the assumption that the 99-100% CPU reported by bonnie++ might indicate a bottleneck:
# iozone -R -i 0 -i 1 -l 12 -u 12 -r 8k -s 22G
(12 threads, 8 KB record size, a 22 GB file per thread, 264 GB total)
" Initial write " 538817.21 0.538 G/s
" Rewrite " 511450.04 0.511 G/s
" Read " 1087437.45 1.087 G/s
" Re-read " 1201127.73 1.201 G/s
" Random read " 576435.70 0.576 G/s
" Random write " 400612.46 0.400 G/s
The results are slightly better than with bonnie++, but not by much.
iozone results for RAID 5:
" Initial write " 516469.10 0.516 G/s
" Rewrite " 489970.21 0.489 G/s
" Read " 1116074.84 1.116 G/s
" Re-read " 1116666.97 1.116 G/s
" Random read " 611738.43 0.611 G/s
" Random write " 199486.44 0.199 G/s
So, as explained in the answers, RAID 10 random write performance is about twice that of RAID 5, while all the other figures are roughly the same (within about 10%).
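A way to see where that random-write gap comes from would be to watch the member disks while the random-write phase runs, for example:
# iostat -x 1
On RAID 5 I would expect read traffic on sda-sdd even though the benchmark only writes, because partial-stripe updates have to read old data and parity back in before new parity can be written; on RAID 10 the writes go straight to the two mirrors.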
I don't agree that RAID 10 should be faster here.
Let's look at reads -- here, there's no reason there should be any difference. Both let you read data from all four drives and use their full bandwidth. With RAID 5, no parity is read unless it's needed, so no difference there.
Now, let's look at writes. For RAID 10, bandwidth is halved since each write has to be done twice. With RAID 5 it's not quite so bad: we have to write out parity, but only 1/4 of what hits the disks is parity (for every 3 bytes of data we write, we write 1 byte of parity). So RAID 10 halves the bandwidth while RAID 5 only pays a 33% penalty, which means RAID 10 actually comes out slightly worse for large sequential writes. The one place RAID 5 really loses is small random writes: updating part of a stripe means reading the old data and parity back in before the new parity can be written (read-modify-write), which is why the random-write figures in the update are roughly half of RAID 10's.
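To put rough numbers on the sequential case, here is a back-of-the-envelope sketch; the 500 MB/s per-disk write speed is purely an assumed figure for illustration, and only the 1/2 vs 3/4 ratio matters:
per_disk=500                      # assumed sequential write speed of one SSD, in MB/s
raw=$(( per_disk * 4 ))           # aggregate raw bandwidth of the four disks
echo "RAID 10 write ceiling: $(( raw / 2 )) MB/s"      # every block is written to two disks
echo "RAID 5  write ceiling: $(( raw * 3 / 4 )) MB/s"  # one parity chunk per three data chunks
With those assumed numbers the ceilings come out to 1000 MB/s and 1500 MB/s; swap in whatever per-disk figure you like, the RAID 5 ceiling stays higher for full-stripe sequential writes.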
Why should RAID 10 be better? (Assuming no device failures.)
I don't think striping RAID is of any use with an SSD. Striping distributes work over several disk heads, but SSDs already have excellent random access.