I am planning to deploy a cost-efficient yet performant SAN/NAS setup for our main office. Use cases: storage for a 20-30 user VDI deployment, a file server, and a primary backup location. Required usable capacity: 10 TB.
I haven't settled on the storage software yet; right now I'm researching possible configurations for the underlying hardware. I've compared prices for a 10K RPM SAS RAID10 setup (10x 2TB HDDs) and a SATA SSD RAID5 configuration (7x 1.6TB SSDs). Interestingly, the SSD setup comes out 20% cheaper if read-intensive drives are used, and 10% more expensive if I choose mixed-use drives. So all-flash RAID5 looks like a feasible option, at least on paper.
However, a long time ago I experienced TONS of trouble with RAID5 in a "good old" 5x 70GB SCSI HDD configuration. Even now, that thing still gives me nightmares. Moreover, I've looked through some threads like This and This, and it seems some people are seriously convinced that my "all-flash RAID5" plan is not going to work.
So, the question is: do you guys have any good reading on this topic, or could you share your personal experience with RAID5 SSD setups? Many thanks in advance!
In my experience, for that kind of production workload I would recommend going with RAID5 on SSDs: it utilizes capacity efficiently while remaining performant. The setup also minimizes RAID rebuild times, since fast drives are used.
https://www.starwindsoftware.com/blog/raid-5-was-great-until-high-capacity-hdds-came-into-play-but-ssds-restored-its-former-glory-2
For the project, go with hardware RAID if your production is around 2-3 hosts, and software RAID for clusters of 4+ nodes.
I agree with @pming on looking into ZFS as a filesystem. It'll give you some good options you might be interested in, e.g. deduplication, various compression options, snapshots, replication (to another pool or system for backups), and so on. Another thing to consider with ZFS would be using larger non-SSD drives and adding an SSD as a read or write cache.
Also, if you go with parity RAID, consider using at least RAID6 (raidz2 in ZFS speak) over RAID5 (raidz in ZFS speak) to help prevent data loss.
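For reference, a raidz2 pool survives two simultaneous drive failures. A minimal sketch (the pool name and device names below are placeholders, not from the original post):

```shell
# Create a raidz2 pool (RAID6-like, tolerates two failed drives).
# "tank" and the sdX device names are placeholders for your setup.
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Verify the layout and redundancy.
zpool status tank
```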
Some of your comments hint that you plan on building some kind of home-grown system to handle your office's needs, but others slightly hint that you may buy an array or a complete solution from a vendor. You may want to clarify.
Nexenta offers a good solution to build a storage system utilizing ZFS.
I use ZFS for this, on a similar amount of space. Yes, I use it for VDI/ESX. No, I don't think you should use RAID6; the space overhead is too high. RAID5 is enough if you use 5-disk vdevs in spans (reducing the cold-data issue to a minimum), effectively giving you a RAID50 configuration, since ZFS always stripes its data across vdevs when possible. If you're cautious about cold data, scrub periodically (in fact, scrub even if you're not cautious). Cold data is a problem with RAID6 as well, by the way.
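The span-of-raidz1-vdevs idea above can be sketched like this (pool and device names are hypothetical, chosen for illustration):

```shell
# Two 5-disk raidz1 vdevs in one pool: ZFS stripes writes across
# the vdevs, giving an effective RAID50 layout.
# "vdipool" and the sdX device names are placeholders.
zpool create vdipool \
  raidz1 sda sdb sdc sdd sde \
  raidz1 sdf sdg sdh sdi sdj

# Periodic scrub (e.g. monthly from cron) to catch silent
# corruption in cold data before a rebuild exposes it.
zpool scrub vdipool
```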
Also keep in mind that ZFS carries significant overhead: keep pools no more than ~85% full (or consider dedicated log devices, which partially mitigate this problem). And if you intend to use zvols, the volblocksize should be at least 8 times the sector size; this requirement isn't met automatically on newer Advanced Format drives (which you will probably end up with at this capacity).
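Concretely, with 4K-sector Advanced Format drives, 8x the sector size means a volblocksize of at least 32K. A sketch, with hypothetical pool/zvol names:

```shell
# Create a zvol with an explicit volblocksize (>= 8x the 4K sector
# size of Advanced Format drives). Names are placeholders.
zfs create -V 500G -o volblocksize=32K vdipool/esx-lun0

# Keep an eye on pool fill level; stay under ~85% allocated.
zpool list -o name,size,alloc,cap vdipool
```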
Besides that, SSDs just rock on ZFS/RAID5.
P.S. Avoid SanDisk at all costs.
I recommend using ZFS for this. If you need a single box for different use cases, ZFS lets you build different storage pools. You could create a striped-mirror zpool (similar to RAID10) with 4 SSDs for your VDI, use some larger 10K or 15K drives for file services, and some even larger 7.2K drives for a backup pool.
For example:
4 x 400 GB SSD mirrored stripe = 0.8 TB for VDI
5 x 1 TB 10K SAS raidz1 (similar to RAID5) = 4 TB for SMB, AFP, NFS, whatever
5 x 2 TB 7.2K SATA / SAS raidz1 = 8 TB for backup (these are rather cheap compared to fast, enterprise-grade SSDs, and perhaps you don't need such speeds for backup?)
This really depends on how much capacity you need for each of these use cases.
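The three-pool layout above could be sketched like this (pool and device names are placeholders I made up, not part of the original suggestion):

```shell
# VDI: striped mirrors (RAID10-like) on 4x 400GB SSDs -> ~0.8 TB usable
zpool create vdi mirror ssd0 ssd1 mirror ssd2 ssd3

# File services: 5x 1TB 10K SAS in raidz1 -> ~4 TB usable
zpool create files raidz1 sas0 sas1 sas2 sas3 sas4

# Backup: 5x 2TB 7.2K drives in raidz1 -> ~8 TB usable
zpool create backup raidz1 sata0 sata1 sata2 sata3 sata4
```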