Setup
Eight 1.5 TB drives in a SANS Digital enclosure, configured as two sets of four and connected to a Windows Server 2003 machine over eSATA. All eight are identical Seagate Barracuda ST31500341AS drives.
Issue
I assumed the guy before me had set the two sets of four drives up as RAID 0 or RAID 5, but he had really just set them up as two concatenated volumes. I laughed at the misuse of a perfectly good RAID controller and then proceeded to turn one of them into a RAID 0 array (I don't care if my backups are lost to corruption for one day).
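For anyone fuzzy on the difference, here's a toy model of why I expected striping to beat concatenation. This is not the controller's actual layout; the stripe unit and disk size are made-up values for illustration:

# Toy block-address mapping: RAID 0 striping vs. concatenation (Python).
N_DISKS = 4
STRIPE = 64            # stripe unit in blocks (hypothetical value)
DISK_BLOCKS = 10**6    # blocks per disk (hypothetical value)

def raid0_disk(lba):
    # RAID 0: consecutive stripe units rotate across all disks.
    return (lba // STRIPE) % N_DISKS

def concat_disk(lba):
    # Concatenation: fill disk 0 entirely, then disk 1, and so on.
    return lba // DISK_BLOCKS

run = range(256)  # a sequential run of 256 blocks
print(sorted({raid0_disk(b) for b in run}))   # [0, 1, 2, 3] -- every spindle works
print(sorted({concat_disk(b) for b in run}))  # [0] -- one spindle does it all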
Once that was done, I was excited to benchmark my RAID 0 volume against the remaining concatenated volume and show it off to my boss, but I was disappointed to find that they are exactly the same speed.
So I switched to a simple JBOD setup, benchmarked an individual disk, and got about 70% of the four-drive performance; I haven't changed anything since. The read/write speed increases at a diminishing rate as I benchmark more drives simultaneously (a rough sketch of that test follows the results below). Does anyone have experience solving a problem like this, or any suggestions?
Here's a typical benchmark result for 4 drives in a RAID 0 or concatenated set:
Test File Size: 500 MB
Testing New File Write Speed....
Data Transfer: 30.75 MB/s, CPU Load: 1.0%
Testing Write Speed....
Data Transfer: 73.73 MB/s, CPU Load: 1.4%
Testing Read Speed....
Data Transfer: 75.29 MB/s, CPU Load: 1.4%
Here's a typical single drive benchmark result:
Test File Size: 500 MB
Testing New File Write Speed....
Data Transfer: 23.98 MB/s, CPU Load: 1.6%
Testing Write Speed....
Data Transfer: 54.16 MB/s, CPU Load: 3.3%
Testing Read Speed....
Data Transfer: 50.09 MB/s, CPU Load: 1.4%
Here's the limit I seem to reach as I benchmark more and more individual drives at the same time (I got up to 4 drives at once):
Test File Size: 500 MB
Testing New File Write Speed....
Data Transfer: 73 MB/s, CPU Load: 1.6%
Testing Write Speed....
Data Transfer: 131 MB/s, CPU Load: 3.3%
Testing Read Speed....
Data Transfer: 104 MB/s, CPU Load: 1.4%
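For reference, the simultaneous-drive test was just the same single-drive benchmark run against N disks at once. The original tool was a GUI benchmark, but a rough Python equivalent would look like the sketch below (the drive letters are placeholders for your own JBOD disks):

import os
import threading
import time

PATHS = [r"E:\bench.tmp", r"F:\bench.tmp", r"G:\bench.tmp", r"H:\bench.tmp"]
SIZE = 500 * 1024 * 1024          # 500 MB, matching the test file size above
CHUNK = b"\0" * (1024 * 1024)     # write in 1 MB chunks

def write_file(path):
    with open(path, "wb") as f:
        for _ in range(SIZE // len(CHUNK)):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())      # push the data past the OS cache

def timed_parallel(paths):
    threads = [threading.Thread(target=write_file, args=(p,)) for p in paths]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    total_mb = len(paths) * SIZE / (1024 * 1024)
    print(f"{len(paths)} drive(s): {total_mb / elapsed:.1f} MB/s aggregate")

# Aggregate throughput should scale with drive count; if it flattens out
# early, everything is funneling through one shared link.
for n in range(1, len(PATHS) + 1):
    timed_parallel(PATHS[:n])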
Update:
The enclosure brand is SANS Digital, if that matters; the software and internal hardware are Silicon Image. Due to time constraints, I'm leaning towards a 6-disk JBOD setup with a single RAID 1 for the truly important data. BackupExec handles multiple backup locations quite well anyway. The tests so far with 3 separate disks are encouraging: over the network, the backup speeds total about 40 MB/s.
Final Update:
I decided that RAID 10 wasn't worth the trouble and followed the plan from the last update. The total speed doesn't seem to be much faster, if at all, so it looks like that's the maximum.
If your enclosure is attached to your 2003 server by one single eSATA connection, then that's your issue: all eight disks share that one link, and the maximum throughput of an eSATA link is roughly the same as a single disk's.
If that's the case, then unfortunately the only real benefit you'll see from the array will probably be seek-time improvements.
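For what it's worth, the numbers line up. Assuming a first-generation 1.5 Gbit/s eSATA link (a 3 Gbit/s link would double this), a back-of-the-envelope calculation puts the ceiling right around the 131 MB/s aggregate plateau reported above:

line_rate = 1.5e9    # bits/s, assuming a 1.5 Gbit/s eSATA link
efficiency = 0.8     # SATA's 8b/10b encoding carries 8 data bits per 10 line bits
ceiling_mb_s = line_rate * efficiency / 8 / 1e6
print(f"link ceiling: about {ceiling_mb_s:.0f} MB/s")   # ~150 MB/s before protocol overhead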
To get decent speed out of the array, you'll need an internal controller connected via PCI/PCIe, and then either internal SATA/SAS drives attached to it or, if you can find one, a RAID controller that can attach to each disk externally via its own eSATA port.