I'm building high-capacity storage for video. It is currently at the 20 TB mark and keeps growing. At the moment I have a 12-bay enclosure filled with 2 TB drives, configured in RAID-6. I plan to buy another 12-bay enclosure and fill it with 2 TB drives as well, and I'm weighing the following two options:
- Expand my existing RAID-6 array onto the 12 additional drives, or
- Create a second RAID-6 array from the 12 new drives and then use LVM to expand the original volume.
In the first case I'd be facing ~4-6 weeks of reshape time before I can use the extra capacity. And if I later add another shelf to the array the same way, the reshape will likely take even longer.
In the second case I lose some capacity and potentially some performance, but it seems easier to manage and the availability windows are more predictable.
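To put a number on the capacity trade-off: RAID-6 always gives up two disks' worth of space per group to parity, so one big group loses less than several small ones. A quick sketch of the arithmetic for the two options above (assuming all 2 TB drives):

```python
def raid6_usable_tb(disks, disk_tb):
    """Usable capacity of one RAID-6 group: two disks go to parity."""
    return (disks - 2) * disk_tb

# Option 1: grow the existing array into a single 24-disk RAID-6.
single = raid6_usable_tb(24, 2)       # 22 data disks -> 44 TB
# Option 2: two independent 12-disk RAID-6 arrays concatenated via LVM.
split = 2 * raid6_usable_tb(12, 2)    # 2 x 10 data disks -> 40 TB

print(single, split, single - split)  # 44 40 4
```

So the second option costs 4 TB (one extra parity pair) per additional shelf, which is the capacity loss mentioned above.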
I also plan to grow this setup to 7-10 shelves in total within the next 2-3 years, and the whole thing will be presented as one huge volume formatted with XFS. So, could anyone speak from experience and elaborate on the pros and cons of each approach, and on the problems I'm likely to face as time goes by?
Thanks
Many things need to be considered when designing a solution to your problem. Since you didn't mention any specific requirements (throughput, IOPS/latency, acceptable RAID rebuild time - just to name a few important parameters), I'll answer only with regard to the shelf/LVM/RAID-group sizing.
As a general rule you should keep your RAID-6 LUNs reasonably small in terms of disk count - it makes no sense to build RAID LUNs with well over 20 disks, as you'll be facing horribly long rebuild times. The big storage vendors (EMC, NetApp et al.) limit their RAID group sizes to somewhere around 20 spindles (depending on the disk model). But those vendors know their RAID controllers, their characteristics and their rebuild times very well, and their RAID implementations are not your run-of-the-mill Linux md RAID or cheap PCIe HBAs. It does, however, make perfect sense to use LVM to stripe or concatenate your RAID-6 LUNs as you add storage and need to expand. That is exactly the use case for a volume manager.
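In concrete terms, the add-a-shelf-and-concatenate path on Linux would look roughly like the sketch below. The device names, the volume group name `video` and the logical volume name `archive` are all assumptions for illustration - substitute your own:

```shell
# New shelf appears as /dev/sdm .. /dev/sdx (assumed names).
# Build a second 12-disk RAID-6 array on it:
mdadm --create /dev/md1 --level=6 --raid-devices=12 /dev/sd[m-x]

# Hand the new array to LVM and grow the existing volume group:
pvcreate /dev/md1
vgextend video /dev/md1

# Extend the logical volume over all the new space:
lvextend -l +100%FREE /dev/video/archive

# XFS can only grow, and it grows online - pass the mount point:
xfs_growfs /srv/video
```

Note that the new array still has to do its initial sync, but unlike a reshape of the original array, that sync runs on idle disks and doesn't put your existing data at risk while it completes.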
I can't comment on the performance side of things since you didn't describe your I/O patterns and requirements.