I am in the process of designing an architecture based around a single Dell R910 server running Windows Server 2008 Enterprise.
I would like the server to have 8 RAID1 pairs of spinning disks, so I intend to implement:
- Dell R910 server with the integrated PERC H700 Adapter, with 1 SAS expander on each SAS connector (so 8 expanders in total)
- 7 RAID1 pairs of 143GB 15K HDDs, each pair on one connector using an expander
- 1 RAID1 pair of 600GB 10K HDDs, paired on the remaining connector using an expander
My main concern is to avoid introducing bottlenecks in this architecture, and I have the following questions:
- Will the PERC H700 Adapter act as a bottleneck for disk access?
- Will using SAS expanders for each RAID1 pair cause a bottleneck, or will it be as fast as attaching each pair of disks directly to the SAS connectors?
- Can I mix different disk types on the controller, as long as the two disks in each RAID1 pair are identical? I assume so.
- Can anyone recommend any single-to-double SAS Expanders that are known to function well with the H700?
Cheers
Alex
The real suggestion I would make is this: unless you are running the Large Hadron Collider, aggregate disk bandwidth generally doesn't matter; only IOPS matter for the overwhelming majority of workloads. Stop trying to make spinning-rust disks fast - they aren't. Disk is the new tape.
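To put a number on that, here's a rough back-of-envelope in Python. The 3.4 ms average seek time is an assumption based on typical 15K SAS drive specs of this era, not a measurement:

```python
# Back-of-envelope: random IOPS of a single 15K RPM disk.
avg_seek_ms = 3.4                              # assumed typical average seek
rotational_latency_ms = 0.5 * 60_000 / 15_000  # half a revolution = 2.0 ms

service_time_ms = avg_seek_ms + rotational_latency_ms
iops_per_disk = 1_000 / service_time_ms

print(f"~{iops_per_disk:.0f} random IOPS per 15K disk")  # ~185 IOPS
```

Mechanical latency caps each spindle at a couple of hundred random IOPS no matter how fast its sequential throughput is, and no controller or expander choice changes that.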
If you need performance, most workloads need IOPS more than bandwidth or capacity. Buy your Dell server with the cheapest SATA drives (to get the carriers), and then replace those cheap SATA drives with the smallest number of Intel 500-series SSDs that meets your capacity needs. Dell's SSD offerings are terribly overpriced compared with Intel SSDs from, say, NewEgg, even though the Intel drives perform better and are more reliable than whatever Dell is shipping for SSDs (Samsung?).
Make one big RAID-5 array of SSDs. Even just 3 modern MLC SSDs in RAID-5 will absolutely destroy 16 15K spinning-rust disks in terms of IOPS, by a factor of 10x or more. Sequential throughput is a non-issue for most applications, but the SSDs will also be roughly 2x faster than spinning disks there. Use large-capacity 7.2K SATA disks for backup media or for archiving cold data. You'll spend less money and use less power with SSDs.
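A quick sketch of where that 10x comes from. The per-device IOPS figures and the 70/30 read/write mix are assumptions (ballpark numbers for that generation of hardware, not benchmarks); the write penalties are the textbook RAID values (2 for RAID-1, 4 for RAID-5):

```python
def array_iops(n_disks, dev_iops, write_penalty, read_frac=0.7):
    """Effective front-end random IOPS for a simple RAID set.

    Reads cost one back-end I/O each; writes cost `write_penalty`
    back-end I/Os (mirror or parity updates).
    """
    raw = n_disks * dev_iops
    return raw / (read_frac + (1 - read_frac) * write_penalty)

hdd = array_iops(n_disks=16, dev_iops=185, write_penalty=2)    # 8 RAID-1 pairs
ssd = array_iops(n_disks=3, dev_iops=20_000, write_penalty=4)  # RAID-5 MLC SSDs

print(f"16x 15K HDD: ~{hdd:,.0f} IOPS")  # ~2,300
print(f"3x MLC SSD:  ~{ssd:,.0f} IOPS")  # ~31,600 -- well over 10x
```

Even with a deliberately conservative 20,000 IOPS per SSD and the full RAID-5 write penalty, the three-drive SSD array comes out more than an order of magnitude ahead of all sixteen spindles.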
Resistance to SSDs on reliability grounds is largely FUD from conservative storage admins and SAN vendors who love their wasteful million-dollar EMC arrays. Recent "enterprise MLC" SSDs are at least as reliable as mechanical disks, and probably much more so (time will tell). Wear leveling makes write lifetime a non-issue, even in the server space. Your biggest worry is firmware bugs rather than hardware failure, which is why I suggest going with Intel SSDs.