I've got software RAID 1 for / and /home, and it seems I'm not getting the read speed I expected out of it.
Reading from md0 I get around 100 MB/sec; reading from sda or sdb directly I get around 95-105 MB/sec.
I thought I would get more speed (while reading data) from two drives. I don't know what the problem is.
I'm using kernel 2.6.31-18
hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   2078 MB in  2.00 seconds = 1039.72 MB/sec
 Timing buffered disk reads:  304 MB in  3.01 seconds = 100.96 MB/sec

hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   2084 MB in  2.00 seconds = 1041.93 MB/sec
 Timing buffered disk reads:  316 MB in  3.02 seconds = 104.77 MB/sec

hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   2150 MB in  2.00 seconds = 1075.94 MB/sec
 Timing buffered disk reads:  302 MB in  3.01 seconds = 100.47 MB/sec
Edit: RAID 1
Take a look at the following nixCraft article: "HowTo: Speed Up Linux Software Raid Building And Re-syncing".
It explains the different settings in /proc that can be adjusted to influence software RAID speed (not just during building/syncing, as the title suggests).
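For orientation, here is a minimal sketch of the kind of tunables that article covers (the values are illustrative, not recommendations; for a RAID 1 array only the speed limits and read-ahead are relevant):

# Resync/rebuild speed limits, in KB/s (these affect rebuild traffic, not normal reads)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Read-ahead on the md device, in 512-byte sectors
blockdev --getra /dev/md0
blockdev --setra 4096 /dev/md0

# stripe_cache_size exists only for RAID 5/6, so it does not apply to RAID 1
# echo 4096 > /sys/block/md0/md/stripe_cache_size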
What kind of RAID?
Any combination of 0 and 1 will give no great improvement to non-concurrent benchmarks of latency or bandwidth. RAID 3/5 should give better bandwidth but no difference in latency.
C.
The problem is that, contrary to your intuition, Linux software RAID 1 does not use both drives for a single read operation. To get a speed benefit, you need two separate read operations running in parallel.
Reading a single large file will never be faster with RAID 1.
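You can see this for yourself with a rough test (a sketch only; the file paths are placeholders for any two large files on the array): start two independent sequential reads and watch both member disks become busy.

# Two independent sequential readers; RAID 1 can serve each from a different disk
dd if=/path/to/big_file_1 of=/dev/null bs=1M &
dd if=/path/to/big_file_2 of=/dev/null bs=1M &

# In another terminal, watch per-disk utilization
iostat -x sda sdb 1

With a single dd you will see only one disk doing most of the work; with two, both should approach full utilization.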
To get the same level of redundancy, with the expected speed benefit, you need to use RAID 10 with a "far" layout. This stripes the data and mirrors it across the two disks. Each disk is divided into segments. With two segments, the stripes on drive 1, segment 1 are copied to drive 2, segment 2, and drive 1, segment 2 is copied to drive 2, segment 1. Detailed explanation.
As you can see from these benchmarks, RAID 10,f2 gets read speeds similar to RAID 0:
f2 simply means far layout with 2 segments.
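For reference, creating such an array looks roughly like this (a sketch; the device and partition names are assumptions, and you cannot convert an existing RAID 1 in place this way, so plan on backing up and rebuilding):

mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat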
Furthermore, in my personal tests, I found that write performance suffered. The benchmarks above suggest that with RAID 10,f2 the write speed should be nearly equivalent to a single disk, but I was seeing almost a 30% decrease. After much experimentation I found that changing the I/O scheduler from cfq to deadline fixed the issue:
echo deadline > /sys/block/md0/queue/scheduler
Here is some more information: http://www.cyberciti.biz/faq/linux-change-io-scheduler-for-harddisk/
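To check what is currently in effect and keep the setting across reboots, something like the following should work (a sketch; whether you also need to change the scheduler on the underlying sda/sdb depends on your setup):

# The active scheduler is shown in brackets
cat /sys/block/sda/queue/scheduler
cat /sys/block/sdb/queue/scheduler

# Optionally apply deadline to the member disks as well
echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler

# To persist on kernels of this era: boot with elevator=deadline,
# or re-run the echo commands from /etc/rc.local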
With this setup, you should be able to get sequential reads of about 185-190 MB/s.