In any of the RAID levels that use striping, increasing the number of physical disks usually increases performance, but it also increases the chance of any one disk in the set failing. I have this idea that I shouldn't use more than about 6-8 disks in a given RAID set, but that's more passed-down knowledge than hard fact from experience. Can anyone give me good rules, with the reasons behind them, for the maximum number of disks in a set?
The recommended maximum number of disks in a RAID system varies a lot. It depends on a variety of things:
For SATA-based RAID, you don't want more than about 6.5TB of raw disk if you're using RAID5. Go past that and RAID6 is a much better idea. This is due to the non-recoverable read error rate. If the array is too large, the chance of a non-recoverable read error occurring during the rebuild after a disk loss gets higher and higher, and if that happens it's very bad. RAID6 greatly reduces this exposure. However, SATA drives have been improving in quality lately, so this may not hold true for much longer.
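To put rough numbers on that, here's a back-of-envelope sketch of the rebuild-risk math. The 1-in-10^14-bit URE figure is the commonly quoted consumer-SATA spec and the 1TB drive size is just an illustration, not a measurement:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while reading
# every surviving disk during a RAID5 rebuild.
# Assumes the commonly quoted consumer-SATA URE spec of 1 error per 1e14 bits.
def rebuild_ure_probability(surviving_disks, disk_size_tb, ure_rate_bits=1e14):
    bits_to_read = surviving_disks * disk_size_tb * 1e12 * 8   # TB -> bits
    # P(at least one URE) = 1 - (1 - 1/rate)^bits; log1p/expm1 keep it numerically stable
    return -math.expm1(bits_to_read * math.log1p(-1.0 / ure_rate_bits))

# A 7-disk RAID5 of 1TB drives: the rebuild must read the 6 survivors end to end.
print(f"{rebuild_ure_probability(6, 1.0):.0%}")   # roughly 38%
```

The exact figure depends on the drive's real error rate, but the shape of the curve is why the rule of thumb tops out where it does: double the raw capacity being read and the chance of a clean rebuild falls off quickly.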
The number of spindles in an array doesn't really worry me overmuch, as it's pretty simple to get to 6.5TB with 500GB drives on U320. If you do that, it would be a good idea to put half of the drives on one channel and half on the other, just to reduce I/O contention on the bus side. SATA-2 speeds are such that even two disks transferring at max rate can saturate a bus/channel.
SAS disks have a higher MTBF (i.e., a lower failure rate) than SATA (though, again, this is beginning to change), so the rules are less firm there.
There are FC arrays that use SATA drives internally. The RAID controllers there are very sophisticated, which muddies the rules of thumb. For instance, the HP EVA line of arrays groups disks into 'disk groups' on which LUNs are laid out. The controllers purposefully place blocks for the LUNs in non-sequential locations, and perform load-leveling on the blocks behind the scenes to minimize hot-spotting. Which is a long way of saying that they do a lot of the heavy lifting for you with regards to multiple channel I/O, spindles involved in a LUN, and dealing with redundancy.
Summing up: disk failure rates don't drive the rules for how many spindles go in a RAID group; performance does, for the most part.
If you are looking for performance, then it is important to understand the interconnect you're using to attach the drives to the array. For SATA and IDE, you're looking at 1 or 2 drives per channel, respectively (assuming a controller with independent channels). For SCSI, it depends heavily on the bus topology. Early narrow SCSI allowed 8 device IDs per chain (i.e., per controller), one of which had to be the controller itself, so you had 7 devices per SCSI chain; wide SCSI doubles that, so you're looking at 15. The key here is that the combined throughput of all the drives can't exceed the capacity of the interconnect, otherwise your drives will be "idling" when they should be running at peak performance.
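As a rough illustration of that last point, here's a sketch comparing the combined streaming rate of the drives against the bus they share. The ~70MB/s per-spindle figure (which another answer below also uses) and the bus numbers are ballpark assumptions, not vendor specs:

```python
# How many drives can stream at full rate before the shared bus becomes the bottleneck.
def drives_before_saturation(bus_mb_s, drive_mb_s):
    return bus_mb_s // drive_mb_s

# Assumed ballpark figure: ~70MB/s sustained per spindle.
for name, bus_mb_s, device_limit in [
    ("U320 SCSI chain", 320, 15),   # 16 IDs minus the host adapter
    ("SATA-2 channel",  300, 1),
]:
    at_peak = min(drives_before_saturation(bus_mb_s, 70), device_limit)
    print(f"{name}: up to {device_limit} device(s) attachable, ~{at_peak} at full streaming rate")
```

Past that point, extra spindles still add capacity and random IOPS, but sequential throughput on that channel flat-lines.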
Keep in mind that the drives are not the only weak link here; any interconnect without redundancy is a single point of failure. If you don't believe me, set up a RAID 5 array on a single-chain SCSI controller, then short the controller out. Can you still get to your data? Yeah, that's what I thought.
Today, things have changed a wee bit. The drives haven't advanced a lot in terms of performance, but the advancement seen is significant enough that performance tends not to be an issue unless you are working with "drive farms", in which case you're talking about an entirely different infrastructure and this answer/conversation is moot.

What you will probably worry about more is data redundancy. RAID 5 was good in its heyday because of several factors, but those factors have changed. I think you'll find that RAID 10 might be more to your liking, as it will provide additional redundancy against drive failures while increasing read performance. Write performance will suffer slightly, but that can be mitigated through an increase in active channels. I would take a 4-drive RAID 10 setup over a 5-drive RAID 5 setup any day, because the RAID 10 setup can survive a (specific case of) two-drive failure, whereas the RAID 5 array would simply roll over and die with a two-drive failure.

In addition to providing slightly better redundancy, you can also mitigate the "controller as a single point of failure" situation by splitting the mirror into two equal parts, with each controller handling just a stripe set. In the event of a controller failure, your stripe will not be lost, only the mirror effect.
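To make the "specific case of two-drive failure" concrete, here's a tiny sketch that enumerates every two-drive failure in a 4-drive RAID 10 and checks whether the array survives, assuming the usual layout of two mirrored pairs:

```python
from itertools import combinations

mirrors = [(0, 1), (2, 3)]            # assumed layout: disks 0+1 mirrored, 2+3 mirrored
failures = list(combinations(range(4), 2))

# The array survives as long as every mirror keeps at least one working member.
survivable = sum(
    all(any(disk not in dead for disk in pair) for pair in mirrors)
    for dead in failures
)
print(f"RAID 10: survives {survivable} of {len(failures)} possible two-drive failures")  # 4 of 6
print("RAID 5:  survives 0 of them; a second failure always loses the array")
```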
Of course, this may be completely wrong for your circumstances as well. You're going to need to look at the tradeoffs involved between speed, capacity, and redundancy. As the old engineering joke goes, "better-cheaper-faster: pick any two." You'll find you can live with a configuration that suits you, even if it's not optimal.
For RAID 5, I'd say 0 drives per array. See http://baarf.com/ or similar rants from any number of other sources.
For RAID 6, I'd say 5 drives plus 1 for each hot spare per array. Any fewer and you might as well do RAID 10 (see the capacity sketch below); any more and you're pushing the risk factor and should go to RAID 10.
For RAID 10, go as high as you like.
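Here's the capacity sketch behind the "might as well do RAID 10" cutoff: usable drives for RAID 6 (n minus 2 for parity) versus RAID 10 (n/2 for mirroring), hot spares not counted. The drive counts are just illustrative:

```python
# Usable drive count: RAID 6 keeps n-2 (two drives' worth of parity),
# RAID 10 keeps n/2 (everything is mirrored). Hot spares excluded.
for n in (4, 5, 6, 8, 12):
    raid6, raid10 = n - 2, n // 2
    print(f"{n:2d} drives -> RAID 6: {raid6} usable, RAID 10: {raid10} usable")
```

At 4 drives the two give you the same usable space, so RAID 10's simpler rebuild wins; from about 5-6 drives upward, RAID 6 starts paying for itself in capacity.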
I use 7 as a "magic" maximum number. For me, it's a good compromise between the space lost to redundancy (in this case, ~14%), the time to rebuild or to grow the array (even if the LUN stays available while rebuilding), and MTBF.
Obviously, this has worked great for me with 14-disk SAN enclosures. Two of our clients had 10-disk enclosures, and there the magic number 7 dropped to 5.
All-in-all, 5-7 has worked for me. Sorry, no scientific data from me either, just experience with RAID systems since 2001.
The effective maximum is the RAID controller bandwidth.
Let's say a disk reads at a maximum of 70MB/sec. Under peak load, the bus can't shovel the data off fast enough. For a busy file server (RAID 5) or DB server (RAID 10), you could hit this quickly.
SATA-2 is a 300MB/s interface spec; SCSI Ultra 320 would be more consistent. At ~70MB/sec per disk, four or five disks already saturate the bus at full streaming rate, so you're talking 6-10 disks in practice because you won't hit peak too often.
The limit on disks in a RAID used to be determined by the number of devices on a SCSI bus. Up to 8 or 16 devices could be attached to a single bus, and the controller counted as one device, so it was 7 or 15 disks.
Hence a lot of RAIDs were 7 disks (one of which was a hot spare, leaving 6 usable disks) or 14 disks with 1 hot spare.
So the biggest thing about disks in a RAID group is probably how many IOPS you need.
For example, a 10k RPM SCSI disk may run around 200 IOPS. If you had 7 of them in a RAID 5, you would lose one disk to parity but then have 6 disks for reads/writes, for a theoretical maximum of 1200 IOPS. If you need more IOPS, add more disks (200 IOPS per disk).
Faster disks, like 15k RPM SAS, may go up to 250 IOPS, and so on.
And then there is always SSD (around 30,000 IOPS per disk); they are RAIDable too (albeit really expensive).
And I think SAS has a crazy maximum for the number of devices, something like 16,000 drives.
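A quick sketch of the IOPS arithmetic above. The per-disk figures are the rough ones quoted in this answer, and subtracting one disk for parity is the same simplification used above (real RAID 5 distributes parity across all spindles):

```python
# Rough read-IOPS ceiling for a RAID 5 set, using the "lose one disk to parity"
# simplification from the answer above.
def raid5_read_iops(total_disks, iops_per_disk):
    return (total_disks - 1) * iops_per_disk

print(raid5_read_iops(7, 200))     # 7x 10k RPM SCSI -> 1200 IOPS
print(raid5_read_iops(7, 250))     # 7x 15k RPM SAS  -> 1500 IOPS
print(raid5_read_iops(7, 30000))   # 7x SSD          -> 180000 IOPS (cost aside)
```

Size from the IOPS you need and work backwards to the spindle count; that is the knob this answer is pointing at.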
With RAID6 and SATA, I've had good success with 11 disks plus one hot spare (some bad controllers need two hot spares to rebuild a RAID6). This is convenient since many JBODs, like the HP MSA60, come in groups of 12 disks.
Adding disks only makes sense until you hit the maximum bus speed at the narrowest point in the chain (RAID card, links). It's the same as adding a lot of 1GbE NICs to your PCI bus: past a certain point it doesn't make any sense.