We (and by we I mean Jeff) are looking into the possibility of using Consumer MLC SSD disks in our backup data center.
We want to keep costs down and usable space up - so the Intel X25-E's are pretty much out at about $700 each for 64GB of capacity.
What we are thinking of doing is buying some of the lower-end SSDs that offer more capacity at a lower price point. My boss doesn't think spending about $5k on disks for servers running out of the backup data center is worth the investment.
These drives would be used in a 6 drive RAID array on a Lenovo RD120. The RAID controller is an Adaptec 8k (rebranded Lenovo).
Just how dangerous an approach is this, and what can be done to mitigate the dangers?
A few thoughts:
Good luck - just don't 'fry' them with writes :)
I did find this link, which has an interesting and thorough analysis of MLC vs. SLC SSDs in servers.
Note that some MLC SSD vendors claim that their drives are "enterprisey" enough to survive the writes:
There is further analysis of these claims at AnandTech.
Additionally, Intel has now gone on the record saying that SLC might be overkill in servers 90% of the time:
Intel, even for their server-oriented SSDs, has migrated away from SLC to MLC with very high "overprovisioning" space in the new Intel SSD 710 series. These drives allocate up to 20% of overall storage internally for redundancy:
Always base these sorts of things on facts rather than supposition. In this case, collecting facts is easy: record longish-term read/write IOPS profiles of your production systems, and then figure out what you can live with in a disaster recovery scenario. You should use something like the 99th percentile as your measurement. Do not use averages when measuring IOPS capacity - the peaks are all that matter! Then you need to buy the required capacity and IOPS for your DR site. SSDs may be the best way to do that, or maybe not.
So, for example, if your production applications require 7500 IOPS at the 99th percentile, you might decide you can live with 5000 IOPS in a disaster. But that's at least 25 15K disks required right there at your DR site, so SSD might be a better choice if your capacity needs are small (sounds like they are). But if you only measure that you do 400 IOPS in production, just buy 6 SATA drives, save yourself some coin, and use the extra space for storing more backup snapshots at the DR site. You can also separate reads and writes in your data collection to figure out just how long non-enterprise SSDs will last for your workload based on their specifications.
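For what it's worth, here is a minimal Python sketch of that sizing exercise. The sample data and per-device IOPS figures are made-up assumptions, so substitute your own monitoring output and measured device numbers:

```
# Rough DR sizing sketch (illustrative only): take the 99th percentile of a
# recorded IOPS profile and estimate how many spindles or SSDs would be
# needed to cover it at the DR site.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ranked = sorted(samples)
    idx = min(len(ranked) - 1, int(round(pct / 100.0 * len(ranked))))
    return ranked[idx]

# iops_samples would come from your monitoring system, e.g. one value per
# minute over several weeks of production traffic (placeholder data here).
iops_samples = [180, 220, 410, 950, 4800, 7500, 300, 650]

p99 = percentile(iops_samples, 99)
dr_target = p99 * 2 / 3          # e.g. accept ~2/3 of production during a disaster

PER_15K_DISK_IOPS = 200          # assumed random IOPS for one 15K spindle
PER_SSD_IOPS = 5000              # assumed sustained random IOPS for one SSD

print(f"99th percentile: {p99} IOPS, DR target: {dr_target:.0f} IOPS")
print(f"15K spindles needed: {dr_target / PER_15K_DISK_IOPS:.0f}")
print(f"SSDs needed:         {dr_target / PER_SSD_IOPS:.0f}")
```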
Also remember that DR systems might have smaller memory than production, which means more IOPS are needed (more swapping and less filesystem cache).
Even if the MLC SSDs only last for one year, in a year's time the replacements will be a lot cheaper. So can you cope with having to replace the MLC SSDs when they wear out?
As the original question is really interesting but all answers are quite old, I would like to give an updated answer.
As of 2020, current consumer SSDs (or at least the ones from top-tier brands) are very reliable. Controller failure is quite rare, and they correctly honor write barriers / syncs / flushes / FUAs, which means good things for data durability. Albeit using TLC flash, they sport quite good endurance ratings.
However, by using TLC chips, their flash page size and program time are much higher than on old SLC or MLC drives. This means that their private DRAM cache is critical to achieving good write performance. Disabling that cache will wreak havoc on the write IOPS of any TLC (or even MLC, albeit with lower impact) drive. Moreover, any write pattern which effectively bypasses the write-combining function of the DRAM cache (e.g. small synchronous writes from an fsync-rich workload) is bound to see very low performance. At the same time, write amplification will skyrocket, wearing out the SSD much faster than expected.
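As a quick way to check a specific drive for this behavior, something along the lines of the sketch below can help. It is only a rough Python stand-in for a proper fio run, and the path and sizes are placeholders:

```
# Crude sync-write IOPS probe: write small blocks and fsync after every
# write, then report writes per second. Point PATH at a file on the SSD
# under test (the mount point below is hypothetical).
import os, time

PATH = "/mnt/ssd-under-test/fsync-probe.dat"
BLOCK = b"\0" * 4096        # 4 KiB synchronous writes
SECONDS = 10

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
writes = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    os.write(fd, BLOCK)
    os.fsync(fd)            # force the drive to persist the write
    writes += 1
os.close(fd)
os.unlink(PATH)

print(f"~{writes / SECONDS:.0f} fsync'd write IOPS")
```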
A practical example: my laptop has the OEM variant of a Samsung 960 EVO, a fast M.2 SSD. When hammered with random writes it provides excellent IOPS, unless the writes are fsync'd: in that case it is only good for ~300 IOPS (measured with fio), which is a far cry from the 100K+ IOPS delivered without forcing syncs.
The point is that many enterprise workloads (e.g. databases, virtual machines) are fsync-heavy, which is unfavorable to consumer SSDs. Of course, if your workload is read-centric this does not apply; however, if you run something like PostgreSQL on a consumer SSD, you may be disappointed by the results.
Another thing to consider is the eventual use of a RAID controller with a BBU-backed (or power-loss-protected) writeback cache. Most such controllers disable the SSD's private DRAM cache, leading to much lower performance than expected. Some controllers support re-enabling it, but not all of them pass down the required syncs/barriers/FUAs needed to get reliable data storage on consumer SSDs.
For example, older PERC controllers (e.g. the 6/i) announced themselves as write-through devices, effectively telling the OS not to issue cache flushes at all. A consumer SSD connected to such a controller can be unreliable unless its cache is disabled (or the controller takes extra, undocumented care), which means low performance.
Not all controllers behave in this manner - for example, newer PERC H710+ controllers announce themselves as write-back devices, enabling the OS to issue cache flushes as required. The controller can ignore these flushes unless the attached disks have their caches enabled; in that case, it should pass down the required syncs/flushes.
However, this is all controller (and firmware) dependent; HW RAID controllers being black boxes, one cannot be sure about their specific behavior and can only hope for the best. It is worth noting that open-source RAID implementations (e.g. Linux MD RAID and ZFS mirroring/RAIDZ) are much more controllable beasts, and generally much better at extracting performance from consumer SSDs. For this reason I use open-source software RAID whenever possible, especially when using consumer SSDs.
Enterprise-grade SSDs with a power-loss-protected writeback cache are immune to all these problems: having a non-volatile cache, they can safely ignore sync/flush requests, providing very high performance and low write amplification irrespective of the HW RAID controller. Considering how low the prices of enterprise-grade SATA SSDs are nowadays, I often see no value in using consumer SSDs in busy servers (unless the intended workload is read-centric or otherwise fsync-poor).
A whitepaper on the differences between SLC and MLC from SuperTalent puts the endurance of MLC at a tenth of the endurance of an SLC SSD, but chances are the MLC SSDs will outlive the hardware you are putting them into anyway. I'm not sure how reliable those statistics/facts from SuperTalent are, though.
Assuming you get a similar level of support from the supplier of the MLC SSDs, the lower price point makes it worth a shot.
If we set the write-endurance problem aside (or prove that consumer-level SSDs can handle it), I think SSDs are a good addition to enterprise-level environments. You will probably be using the SSDs in a RAID array, likely RAID 5 or RAID 6, and the problem with those is that after a single drive failure the array becomes increasingly vulnerable until the rebuild finishes - and the rebuild time depends heavily on the volume of the array. A several-TB array can take days to rebuild while being constantly accessed. With SSDs, the RAID arrays will a) inevitably be smaller and b) rebuild drastically faster, as the rough comparison below illustrates.
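As a back-of-the-envelope illustration of that rebuild-time difference (the capacities and sustained rebuild rates below are assumptions, not measurements - substitute figures from your own controller):

```
# A rebuild has to re-read/re-write roughly one member's worth of capacity,
# throttled by whatever rate the array can spare while still serving I/O.

def rebuild_hours(member_capacity_gb, sustained_rebuild_mb_s):
    return member_capacity_gb * 1024 / sustained_rebuild_mb_s / 3600

# Assumed, illustrative figures:
print(f"2 TB 7.2K HDD at 40 MB/s under load: {rebuild_hours(2000, 40):.1f} h")
print(f"256 GB SSD at 200 MB/s under load:   {rebuild_hours(256, 200):.1f} h")
```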
You should just calculate the amount of daily writes you have with your current set-up and compare that with what the manufacturer guarantees their SSD drives can sustain. Intel seems to be the most up-front about this - for example, take a look at their mainstream SSD drive datasheets: http://www.intel.com/design/flash/nand/mainstream/technicaldocuments.htm
Section 3.5 (3.5.4, specifically) of the specs document says that you're guaranteed to have your drive last at least 5 years with 20GB of writes per day. I assume that's being calculated when using the entire drive capacity and not provisioning any free space for writes yourself.
Also interesting is the datasheet regarding using mainstream SSDs in an enterprise environment.
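To put the 20GB/day guarantee in context, here is a quick back-of-the-envelope calculation. The measured daily write volume below is a made-up example; pull your own from iostat or your drives' SMART host-write counters:

```
# Convert a "GB/day for N years" endurance guarantee into total host writes
# and estimate how long a drive would last under a measured workload.
# The 20 GB/day x 5 years figure is from the datasheet cited above;
# the 120 GB/day workload is a hypothetical example.

rated_gb_per_day = 20
rated_years = 5
rated_total_tb = rated_gb_per_day * 365 * rated_years / 1000   # ~36.5 TB

measured_gb_per_day = 120   # hypothetical: from iostat / SMART deltas
expected_years = rated_total_tb * 1000 / measured_gb_per_day / 365

print(f"Rated endurance: ~{rated_total_tb:.1f} TB of host writes")
print(f"At {measured_gb_per_day} GB/day, expect roughly {expected_years:.1f} years")
```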
I deployed a couple of 32GB SLC drives a couple of years ago as a buffer for a hideously poorly designed app we were using.
The application was 90% small writes (< 4k) and, once on the SSD drives, ran consistently (24/7) at 14k w/s. They were configured as RAID 1; everything was rosy, latency was low!
However, roughly one month in, the first drive packed up, and literally within 3 hours the second drive had died as well. RAID 1 was not such a good plan after all :)
I would agree with the other posters on some sort of RAID 6 - if nothing else, it spreads those writes out across more drives.
Now bear in mind this was a couple of years ago and these things are much more reliable now and you may not have a similar I/O profile.
The app has since been re-engineered; however, as a stop-gap which may or may not help you, we created a large RAM disk, wrote some scripts to rebuild/back up the RAM disk, and accepted the hit of an hour or so of data loss/recovery time.
Again, the life cycle of your data may be different.
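For reference, that kind of stop-gap can be approximated with something as simple as the sketch below. The mount point, destination, and interval are placeholders, and it assumes rsync is installed:

```
# Periodically copy the contents of a tmpfs mount to persistent storage so a
# crash or reboot costs at most one interval of data. Paths and interval are
# hypothetical.
import subprocess, time

RAMDISK = "/mnt/ramdisk/"         # e.g. a tmpfs mount
BACKUP = "/var/backups/ramdisk/"  # persistent copy on ordinary disks
INTERVAL = 15 * 60                # seconds between snapshots

while True:
    # -a preserves permissions/timestamps, --delete keeps the copy in sync
    subprocess.run(["rsync", "-a", "--delete", RAMDISK, BACKUP], check=False)
    time.sleep(INTERVAL)
```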