- HP 2012i SAN,
- 7 disks in RAID 5 with 1 hot spare,
- took several days to expand the volume from 5 to 7 300GB SAS drives.
Looking for suggestions about when and how I would determine that having 2 volumes in the SAN, each one RAID 5, would be better?
I can add 3 more drives to the controller someday, the SAN is used for ESX/vSphere VMs.
Thank you...
I've wrestled with this question for a while. There are a number of factors determining how many disks should go into a RAID5 array. I don't know the HP 2012i, so here is my generic advice for RAID5:
RAID6 (double parity) is a way to get around the non-recoverable read error rate problem. It does increase controller overhead though, so be aware of the CPU limits on your controllers if you go there. You'll hit I/O bottlenecks faster using RAID6. If you do want to try RAID6, do some testing to see if it'll behave the way you need it to. It's a parity RAID, so it has the same performance penalties for rebuilds, expansions, and restripes as RAID5, it just lets you grow larger in a safer way.
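To put rough numbers on the non-recoverable read error problem, here's a back-of-the-envelope sketch. The 1-in-10^14-bits URE rate is an assumed, commonly quoted spec (not measured from any particular drive), and the model treats errors as independent, which real drives don't strictly obey:

```python
# Back-of-the-envelope sketch (my assumptions, not a vendor spec):
# probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID5 array, since the rebuild must read
# every sector of every surviving disk.

def rebuild_ure_probability(surviving_disks, disk_size_gb, ure_rate_bits=1e14):
    """Probability of >= 1 URE while reading all surviving disks in full."""
    bits_read = surviving_disks * disk_size_gb * 1e9 * 8  # total bits to read
    return 1 - (1 - 1 / ure_rate_bits) ** bits_read

# A 7-disk RAID5 of 300 GB drives: rebuilding reads the 6 surviving disks.
print(round(rebuild_ure_probability(6, 300), 3))  # ~0.134, i.e. ~13% risk
```

Even at these modest disk sizes the risk is non-trivial, and it scales with total bytes read, which is why RAID6's second parity stripe matters more as arrays grow.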
There isn't really a clear limit on the number of disks in RAID 5 per se. The limits you'll meet are typically related to RAID 5's relatively poor write performance, and limitations imposed elsewhere (RAID controller, your own organization of data, etc.).
Having said that, with 7-8 disks in use you're close to the upper bound of common RAID 5 deployments. I'd guesstimate that the vast majority of RAID 5 deployments use fewer than 10 disks. If more disks are wanted, one would generally go for nested RAID levels such as RAID 50.
I'm more puzzled by your choice to keep one big array for all this. Would your needs not be better served by 2 arrays, one RAID5 for slow, mostly read data, and one RAID 10 for more I/O intensive data with more writes?
For my money, I'd do two three-disk arrays, with one shared hot spare.
If you don't have a need for a single block of space larger than a 3-disk array, then there is no reason to cram all 6 disks into one RAID set. You'd gain only one disk's worth of extra space and little in performance, and, given a two-disk failure, you're probably going to be in a better place with two arrays.
@Dayton Brown: Not quite the same total space: 3 1TB drives in a RAID5 give you roughly 1.8TB usable, so two such arrays give ~3.6TB, while a single 6-disk RAID5 would give ~4.5TB (five data disks instead of four). The real difference is that RAID5 only ever tolerates one failed drive per array, whether there are 3 drives in the RAID or 300, so splitting the drives into manageable groups costs you one disk of capacity but adds protection against multiple failures. Even if you lost 3 disks, for example, you'd only lose half of your data.
If you moved to RAID6, a six disk array would make more sense, because you could lose two and be okay. But most people jump straight to RAID10, and skip 6.
How much space do you NEED? You currently have ~1.995TB; could you live with ~1.140TB? If so, I'd strongly suggest you move to RAID 10 - not only will it be faster but you'll be able to lose up to half your disks (one per mirror pair) without user impact. If you choose to go this way I'd order the extra 4 x 300GB disks now and build it with all 12 disks on day one rather than add them later - this will also handily give you ~2.280TB available too.
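To make the capacity trade-off concrete, here's a quick sketch of the usable-space arithmetic. These are my own raw decimal-GB figures ignoring formatting overhead, which is why they differ slightly from the binary-TB numbers quoted above:

```python
# Quick sketch of usable capacity for the layouts discussed above,
# using 300 GB drives; raw decimal GB, hot spares not counted.

def raid5_usable(disks, size_gb=300):
    """RAID5 loses one disk's worth of capacity to parity."""
    return (disks - 1) * size_gb

def raid10_usable(disks, size_gb=300):
    """RAID10 loses half the disks to mirroring."""
    return disks // 2 * size_gb

print(raid5_usable(8))    # 2100 GB - the current 8-disk RAID5
print(raid10_usable(8))   # 1200 GB - same disks rebuilt as RAID10
print(raid10_usable(12))  # 1800 GB - RAID10 with 4 extra disks added
```

The pattern is the generic one: RAID5's parity cost is fixed at one disk per array, while RAID10's mirroring cost scales with the array, which is the price paid for its rebuild and write-performance advantages.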
With RAID5, you want to keep your disk count as low as possible, while still providing the necessary space. A hot spare, per RAID5 array, is recommended to ensure reliability.
RAID6 is really a much better idea for a future addition. Fully populate the rest of your array with disks, and create a new RAID6 LUN using all available disks, with one hot spare. Migrate your VMs to the new LUN and convert the old RAID5 array to RAID6.
Lots of folks, in fact, recommend this, especially in this day and age.
Any number of disks in a RAID 5 array is too many. The only possible exception may be when you want the maximum storage per dollar for something where speed and reasonably quick recovery from failure are not issues.
You've invested in expensive hardware to build a SAN; arrays, switches, fibre, training etc. Then you've put in pretty fast (relatively) expensive disks. Furthermore you want to put VMs on it. With this investment and usage it looks like you want to make it as fast as you can so your VM users don't complain about performance. So why touch RAID 5 at all?
BAARF (the Battle Against Any Raid Five) is old, but still relevant, as using RAID5 is as bad an idea now as it was a decade ago.
Technically? I think my Adaptec 5805 limits a single array to 32 disks. I would, though, NEVER put RAID 5 across that many disks ;)
I don't have a suggestion as to the array size but I would suggest adding a second hot spare or making this RAID 6. The time required to rebuild an array of that size increases the risk of a drive failing before the array has been rebuilt.
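As a rough illustration of why rebuild time is the worry: here's a sketch of the rebuild window as a function of drive size. The sustained rebuild rate is a guess of mine, not a 2012i spec, and real rebuilds running alongside production I/O are often much slower:

```python
# Illustrative only: rough rebuild-time estimate. The rebuild rate is an
# assumed figure, not a measured or vendor-published number; controllers
# under production load typically rebuild far slower than their peak rate.

def rebuild_hours(disk_size_gb, rebuild_mb_s=50):
    """Hours to rewrite one failed drive at a sustained rebuild rate."""
    return disk_size_gb * 1024 / rebuild_mb_s / 3600

print(round(rebuild_hours(300), 1))   # ~1.7 h for a 300 GB drive, best case
print(round(rebuild_hours(300, 10), 1))  # ~8.5 h at 10 MB/s under load
```

Every hour of that window is time the array runs with no parity protection, which is exactly when a second failure (or a URE) is fatal to a RAID5 set.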
If you have any doubts about the RAID 5 array and number of disks needed for the setup, then go through this link.