Per-GB prices of fast SSDs (like the Intel X25-E) come close to the prices of high-end RAM. So what extra advantages does an SSD give you? What are the particular reasons to buy an SSD instead of just putting more RAM in your server machine, to be used as an HDD cache or even as a RAM disk?
EDIT: Of course I'm aware that SSDs are persistent, but so is the data in a disk cache. Reading from RAM has got to be a lot faster than reading from an SSD. Also, SSDs have slow write times, so there's no advantage over an HDD there, especially for sequential writes.
EDIT2: The amount of RAM you can install is not so limited. With the introduction of DDR3 it's no longer a multiple of 2, it's a multiple of 3. Standard SOHO motherboards have 6 slots, while server boards have 12 or even an impressive 18 slots, supporting a total of 144GB of RAM. Even if you use more cost-effective 4GB memory sticks, you can still have 72GB.
So what extra advantages does an SSD give you?
Why buy an SSD instead of just putting more RAM in your server machine?
When I need fast persistent storage, I use SSD.
When I need fast volatile storage, I use RAM.
If the UPS fails, or the motherboard fails, or the software crashes the OS, you lose everything in RAM.
There is simply no substitute for persistent storage.
Further, though you state the cost is similar, the cost of high-performance SSDs is going to drop like a rock over the next two years.
Right now it might make sense to keep read-only data, or indexes that you don't mind rebuilding, stored completely in RAM.
In cases where the cost and risk are low, you might even perform more aggressive disk caching against a slower hard drive.
But at the end of the day, if you want persistent storage AND performance, you either buy BOTH a slow hard drive and fast RAM, or you buy a high performance SSD.
In general the SSD is going to be cheaper than both the hard drive and RAM together.
But at any rate, SSDs are still niche items. You don't use an SSD unless you have specific needs.
-Adam
Predictability and flexibility.
First, lumping more RAM into an existing system helps performance a lot in the lower ranges, but the benefits drop off quickly as you get into a space where the OS doesn't really utilize the extra RAM very efficiently. At some point the OS simply has difficulty predicting which sectors will be reread off the disk (access to those sectors is effectively random).
Enter predictability: if you want to make sure every single record in your database is accessible at high speed, putting the entire database on high-speed media definitely accomplishes this.
There are other ways to achieve this (RAM drives, specialized databases), but doing things that way opens you up to other issues (power failure, and being less standard generally means less tested, etc.).
The flexibility of SSDs is simple: most motherboards don't support adding RAM on the fly.
Indirectly related - consider Fusion-io's ioDRIVE technology - for certain applications they're a godsend.
Pros - faster than SSD (both read and write), persistence, large(ish) capacity, blade-versions available, cheaper per GB than RAM but nearly as fast. Cons - slower than RAM, dearer per GB than SSD.
If you had an application that needed to write a fair amount to a large dataset, but with VERY quick, ideally random, access, then I think they really have a place. We're going to use them with Zeus ZXTM L4-7 LBs/web-caches.
I have one of the FusionIO SSDs shared via NFS over 10GbE from a Red Hat system to an ESX cluster. It's wicked fast (>500MBps when copying 100GB files), but what I've found to be the limitation is my applications.
Currently I have a build script running in a VM that does about 250GB of IO for every build. It used to take 6-8 hours to run; now it's 3 hours. That's a great improvement, but it's not the 10 times faster I was hoping I might get. In analyzing the script I found that the bottleneck is the hashing algorithm that analyzes the build files for process-tracking purposes. If I swapped the algorithm I might get 3-hour builds on regular hard drives.
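A quick way to confirm that kind of CPU-bound bottleneck is to time the hashing step in isolation against a plain read of the same file. This is just a minimal sketch, not the actual build script: the file path and the choice of hash algorithm below are placeholder assumptions.

```python
# Minimal sketch: compare raw read time with read-plus-hash time on one
# build artifact. If the two differ a lot, the hash (CPU) is the bottleneck,
# not the storage. Run on a cold cache, or on a file larger than RAM, for a
# fair comparison. Path and algorithm are hypothetical placeholders.
import hashlib
import time

CHUNK = 1024 * 1024  # read in 1MB chunks to keep memory use flat

def read_only(path):
    """Stream through the file without doing any work on the bytes."""
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass

def read_and_hash(path, algo="sha1"):
    """Stream through the file, feeding every chunk into the hash."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

def timed(label, fn, *args):
    start = time.perf_counter()
    fn(*args)
    print(f"{label}: {time.perf_counter() - start:.2f}s")

sample = "build/artifact.bin"  # hypothetical build output
timed("read only  ", read_only, sample)
timed("read + hash", read_and_hash, sample)
```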
The moral of the story? Look at your process first, it might benefit more from code improvements than hardware improvements.
SSDs don't lose their data when you power them off - obvious but important.
Note that you can attach an 80GB SSD to any server, but 80GB of RAM is problematic: you'd need rare and expensive 8GB sticks and a board with more than 8 slots.
Not to mention you can even install 160GB of SSD in a jiffy...
To tell the truth, SSD vs RAM gives no advantage, unless you cannot upgrade your server to more RAM.
Coming up with a 512GB X25-E solution requires 8 SATA connectors and around USD 6,800.
Coming up with a 512GB RAM solution requires at least USD 50,000 and a lot of ingenuity.
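Working the cost per GB out from those figures (the prices and drive count are the ones quoted above, not current market data):

```python
# Back-of-the-envelope cost per GB, using the figures quoted above.
ssd_gb, ssd_usd = 512, 6_800      # 8 x 64GB Intel X25-E
ram_gb, ram_usd = 512, 50_000     # rough estimate for a 512GB RAM build

print(f"SSD: {ssd_usd / ssd_gb:6.2f} USD/GB")           # ~13.28 USD/GB
print(f"RAM: {ram_usd / ram_gb:6.2f} USD/GB")           # ~97.66 USD/GB
print(f"RAM is ~{ram_usd / ssd_usd:.1f}x the price")    # ~7.4x
```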
Max out your RAM for fast working memory. Go with fast SSDs for persistent storage. Optimizing your software to use less RAM is still worth the trouble once you hit the high numbers.
Have you done the maths comparing access speeds for RAM, SSDs and disks?
If you compare accessing something in RAM (1033MHz RAM) with fetching it from a disk (8ms seek time), RAM is something like 10 million times faster.
Now replace the disk with an SSD. Anandtech gives a random read latency of 0.22ms for Intel's X25-M. That's 36 times faster than the disk. Let's be generous and call it 100 times faster (it makes my mental arithmetic easier, too). That makes RAM 100,000 times faster than SSDs.
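For reference, here is that arithmetic spelled out with the round numbers used above (roughly 1ns per access for 1033MHz RAM, 8ms per disk seek, 0.22ms per SSD random read); it's only a sketch of the ratios, not a benchmark:

```python
# Latency ratios from the round numbers in the text, all in nanoseconds.
ram_ns  = 1 / 1.033          # ~0.97ns per cycle at 1033MHz
disk_ns = 8e6                # 8ms seek time
ssd_ns  = 0.22e6             # 0.22ms random read (Intel X25-M, per Anandtech)

print(f"RAM vs disk: {disk_ns / ram_ns:,.0f}x")   # ~8 million (call it 10 million)
print(f"SSD vs disk: {disk_ns / ssd_ns:.0f}x")    # ~36x
print(f"RAM vs SSD:  {ssd_ns / ram_ns:,.0f}x")    # ~230,000x before the generous
                                                  # rounding down to 100,000x above
```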
If you can cache something in RAM, then that is the way to go. Otherwise using SSDs to cache disk data may give you some benefits. It all depends on the amount of data needing caching vs the amount of RAM and/or SSD available for caching.