Is anybody here running SQL Server on solid-state drives? Have you found any specific optimization tips? I'm specifically interested in ways to reduce the frequency with which SQL Server performs small random write operations since they're the nemesis of SSD performance, particularly MLC SSD drives.
There are some obvious optimizations one can do, of course: read-heavy data should be served from the SSD, and write-heavy stuff should be left to traditional spinning disks. That includes transaction logs, naturally!
Given enough budget, of course, one would want to use SLC SSDs like the X25-E or the Vertex EX series, or various enterprise-level offerings. But I'm also interested in tips that might benefit MLC SSD setups; I think that's an interesting area. One of my clients' clients has a small budget and a dataset that's grown immensely, and they're facing a complete rewrite of close to a hundred queries just to maintain a decent level of performance. However, I have a sneaking suspicion that less than $500 of RAM and SSD space might net them a bigger performance gain than thousands (possibly tens of thousands) of dollars' worth of developer time.
I'm not sure what you mean by reducing the number of small, random writes that SQL Server does. SQL Server writes out data pages only during checkpoints, so the only ways to limit the number of writes are to change the checkpoint interval or do fewer insert/update/delete (IUD) operations. Did you mean something else?
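If you do want to experiment with the checkpoint interval, the server-wide `recovery interval` option is the usual knob. A minimal sketch (the five-minute value below is purely illustrative, not a recommendation):

```sql
-- Lengthen the automatic checkpoint target via 'recovery interval' (minutes).
-- 0 is the default (roughly a one-minute recovery target); a larger value
-- means fewer, bigger checkpoints at the cost of longer crash recovery.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 5;  -- illustrative value only
RECONFIGURE;
```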
In all the implementations of SSDs that I've seen (a handful), it's kind of the opposite of what you're suggesting: the best use of SSDs seems to be for write-heavy transaction logs and tempdb. Basically, find the biggest I/O subsystem bottleneck and put the SSDs there, since seek time and latency are reduced to a low constant.
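To find that bottleneck, you can look at per-file stall times with the `sys.dm_io_virtual_file_stats` DMV. A sketch (column choice and ordering are just one way to slice it):

```sql
-- Average write stall per file, worst first. High stall times on a file
-- suggest that file's volume is where the SSDs would pay off most.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC;
```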
Check out this research paper that MS produced (unfortunately not hugely detailed on SQL Server specifics): Migrating Server Storage to SSDs: Analysis of Tradeoffs.
Hope this helps!
You can't modify SQL Server's I/O characteristics. Its basic unit of disk access, for data files, is an 8 KB page. It writes them mostly during a checkpoint, but will also lazy-write them when it can.
SQL Server doesn't wait for writes to the data disk to complete before returning; only the log writes must be completed. If you can keep just one database's log on a disk, the writes will be sequential and will perform fine on normal fast hard disks.
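Moving a log file onto its own dedicated disk can be done with `ALTER DATABASE ... MODIFY FILE`. A hedged sketch — `MyDb`, the logical file name, and the path are all placeholders for your own setup:

```sql
-- Take the database offline, repoint the log file, move the physical
-- .ldf file to the new volume, then bring the database back online.
ALTER DATABASE MyDb SET OFFLINE;

ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, FILENAME = 'E:\SQLLogs\MyDb_log.ldf');

-- (Copy the .ldf file to E:\SQLLogs\ at this point, outside SQL Server.)

ALTER DATABASE MyDb SET ONLINE;
```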
The performance hit, from SQL Server's point of view, comes when it has to read from disk. If you can give it more memory, SQL Server will hold more data pages in memory, which is faster than any sort of disk, SSD or otherwise. Obviously you can also reduce the number of disk reads by creating appropriate indexes. I expect an SSD would help with these reads too, because they are likely to be random and would otherwise be held up waiting for drive heads to move.
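If you do add RAM, make sure SQL Server is actually allowed to use it. A minimal sketch using the `max server memory (MB)` option (the 4096 figure is a placeholder; size it for your box, leaving headroom for the OS):

```sql
-- Let the buffer pool grow after a RAM upgrade. The value is in MB
-- and is illustrative only -- tune it to your server's total memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```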
I don't know what database size we're talking about here, but you might want to take a look at HyperOS. They make SATA disks that are actually just a load of DDR2 RAM sticks, with an SSD or 2.5-inch disk as a backup. The server's access pattern won't matter a jot then. I wouldn't put the logs on anything like this, though. Logs are what keep your data consistent; they need to go on a reliable medium, and despite its backup SSD and battery (and the server probably having a UPS, etc.), I would still feel uneasy about not having my logs on a real hard disk in some sort of fault-tolerant RAID array.
Small random operations are the nemesis of traditional disks, due to head seek latency... SSDs are great at addressing exactly this.
With long, sequential operations, standard disks perform quite well, so there would be no point in using SSDs (from a performance standpoint, of course).
Not enough points here yet to add to the comment thread, but if you set the DB's page size / multi-block read count for anything on the SSDs to a multiple of the SSD's page size, this should not be an issue.
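On Oracle, for instance, the multi-block read size is governed by `DB_FILE_MULTIBLOCK_READ_COUNT`. A sketch of the idea, assuming an 8 KB database block size (both values are illustrative):

```sql
-- With 8 KB blocks, 128 blocks per multiblock read = 1 MB per I/O,
-- a clean multiple of common SSD page sizes.
ALTER SYSTEM SET db_file_multiblock_read_count = 128 SCOPE = BOTH;
```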
I have not worked on SQL Server in a long time so I am not sure if these options are available there. I've been doing Oracle and DB2 for the last few years and this would solve your concerns as the DB would be properly tuned to the disk characteristics.
I would recommend aligning the partitions where the database files are stored.
I would also recommend deciding what goes on RAID 0 for performance (ldf and tempdb) and placing critical data on RAID 1 (mdf).
Thirdly, you really should update the drives' firmware as well as the SATA controller firmware/drivers. By doing so, you give the hardware company and their developers a chance to optimize performance for you.