We received a quote for the following server setup:
- Intel Xeon E5-2609V2
- 2 x WD RE 2TB SATA III
- 34 x WD RE 4TB SAS II
- 32 GB memory (ECC)
- LSI MegaRAID 9271-8i bulk, 8 x SATA III/SAS II internal hardware RAID
- LSI CacheVault for 9266-9271 series
We want to add a JBOD to that server right away, half-filled with 8 TB drives, so we can extend it later. They suggested:
- LSI MegaRAID 9380-8e
- 22 x HGST Ultrastar 8TB He8 enterprise, SAS III
Now, this quote was based on our previous server, which we set up as a ZFS server and did not have much "pleasure" with (although the configuration was probably to blame).
I have a few questions about this setup:
- The argument for the 2 x 2 TB drives is to use them as a mirror for the system, since I/O is sluggish during a rebuild when a disk has to be replaced. Speed is not our real problem, space is, and we also have an online backup that will only be used as a read platform (during problems). Would 36 x 4 TB be a better choice? (36 = 3 x 12 disks in a pool; see the layout sketch after this list.)
- Is 32 GB of memory enough? (ZFS on Linux, taking into consideration the JBOD at max capacity: 44*8 + 32*4.)
- This is a RAID controller; would a JBOD/HBA (?) be a better choice? If so, what kind of JBOD should I be looking for?
- How would I best set up this system to be "ready" for adding the next 22 disks in the JBOD? (It's a 44-disk JBOD; 22 slots are filled.)
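For reference, the 3 x 12-disk layout I have in mind would be created roughly like the sketch below (RAIDZ2 per vdev; the device names are placeholders, not our actual disks, and real setups would use the proper /dev/disk/by-id/ paths):

    # Sketch only: one pool made of three 12-disk RAIDZ2 vdevs (36 x 4 TB).
    # Disk names below are placeholders for the actual drives.
    zpool create tank \
        raidz2 /dev/disk/by-id/disk{01..12} \
        raidz2 /dev/disk/by-id/disk{13..24} \
        raidz2 /dev/disk/by-id/disk{25..36}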
Some more info based on the comments:
- Uptime/availability: we don't care if it drops out for a few minutes, as long as this is not all too common; no HA needed. In essence this will be a hot backup for our current storage server. We mainly read, and writes are not speed-limited (by far).
- Read speed is important, but we don't want to give up space for it.
- Write speed is not that important; mostly it's streams from machines writing large files, and those are copies, so it can run overnight.
I would work with a ZFS professional or vendor who specializes in ZFS-based solutions. You're talking about 100TB of data, and at that scale, there's too much opportunity to screw this up.
ZFS is not an easy thing to get right; especially when you incorporate high-availability and design for resilience.
I wouldn't plan on half-filling storage enclosures or anything like that. Expanding ZFS arrays is not something you can do easily with RAIDZ1/2/3, and expanding ZFS mirrors can leave you with unbalanced data.
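To make that concrete: with RAIDZ you cannot grow an existing vdev by adding disks to it; you can only add a whole new top-level vdev to the pool. A later expansion would look roughly like the sketch below (pool name and device paths are placeholders), and data already in the pool is not rebalanced onto the new vdev:

    # Expansion happens in whole-vdev increments, e.g. adding another
    # RAIDZ2 vdev when the second half of the JBOD is populated.
    zpool add tank raidz2 /dev/disk/by-id/jbod-disk{01..11}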
I am not sure I would use ZFS on Linux for such a setup, as ZoL remains a somewhat "moving target".
Regarding your RAID card, if it can be configured in JBOD mode, there is no problem. However, if it only works in RAID mode, I would replace it with a JBOD/HBA adapter.
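As a quick sanity check (just a sketch), every physical drive should show up to the OS as its own block device rather than being hidden behind a RAID volume, and you should hand ZFS stable identifiers instead of /dev/sdX names:

    # Each disk should appear as a separate device with its real model/size.
    lsblk -o NAME,SIZE,MODEL

    # Prefer these stable paths when building the pool.
    ls -l /dev/disk/by-id/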
Anyway, as suggested by ewwhite, I would consult a professional ZFS vendor/consultant.