This question touched on the topic, but to give specifics: running on 64-bit Linux, I have a set of data-caching JVMs as well as a need for a large web cache. Each cache today is sized so that, in total, everything fits in system memory (24GB), and each persists to disk with LRU eviction.
I'm curious, however, what the performance would be if we **over-**allocated the cache processes and set up an SSD as a high-priority Linux swap device. I'm wondering whether the Linux kernel's paging might be a little smarter/faster than our simplistic LRU process?
I'm also concerned about over-allocating the JVM heap and having heap pages swapped out by the kernel, since the garbage collector has to traverse those pages regularly and would fault them back in.
I'm afraid I'm going to disagree with the other responses. Yes, an SSD cell will only take something like 100K writes. For a 100GB drive, that means writing 10^16 bytes in total - or a steady stream of 100MB/s for 3000 years. Even if wear leveling is so bad that you only get 1% of that... well. Also, performance degradation is taken care of by discard (TRIM) support, and modern drives don't degrade noticeably with use.
Yes, having more servers and RAM is even better, but while you get 100GBytes of SSD for maybe $200 these days, a server with 100GBytes of RAM will cost you about 100x that - just for the RAM. Power consumption will likely also be a factor of 100.
I think an SSD would be great for swap: going from a handful of IOPS to tens of thousands is exactly what swap needs. But I'm only speculating here - I'd love to see real numbers from SSD-based swapping.
Edit: to answer the OP directly, I agree the kernel's paging is likely to be smarter than your LRU process (no offense! :-), so you could even try over-allocating with your existing rotating disk first.
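To put rough numbers on the IOPS point, here's a quick back-of-the-envelope sketch in Python. The IOPS figures and the 1 GiB working set are assumptions for illustration, not measurements of any particular hardware:

```python
# Assumed figures: ~100 IOPS for a 7200rpm disk doing random 4K reads,
# ~20,000 IOPS for a SATA SSD. Both numbers are illustrative guesses.
PAGE_SIZE = 4 * 1024          # bytes per page
WORKING_SET = 1 * 1024**3     # 1 GiB of cold pages faulted back in

pages = WORKING_SET // PAGE_SIZE

for name, iops in [("7200rpm disk", 100), ("SATA SSD", 20_000)]:
    seconds = pages / iops
    print(f"{name}: {pages} random page-ins at {iops} IOPS ~= {seconds:,.0f} s")

# Under these assumptions:
#   7200rpm disk: 262144 random page-ins at 100 IOPS ~= 2,621 s
#   SATA SSD:     262144 random page-ins at 20000 IOPS ~= 13 s
```

That ~200x gap in time to fault a cold working set back in is the whole argument for SSD swap in a nutshell.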
My guess is that your performance won't scale with the cost and effort involved. My gut tells me you may be MUCH better off with additional servers packed full of RAM, if you can partition your data cache in a way that makes sense.
An SSD has the benefit of near-zero latency (compared to a rotating disk) when retrieving data, but the storage bus and driver stack that sit between it and main memory or the network are going to slow it down considerably relative to RAM.
In addition to the two other very good responses, you may want to look at KSM (Kernel Samepage Merging) as a way of deduplicating identical pages in RAM. It was merged into Linux for the 2.6.32 release.
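If you do try KSM, a quick way to see whether it is actually merging anything is to read its counters out of sysfs (KSM itself is switched on by writing 1 to /sys/kernel/mm/ksm/run). A minimal sketch, assuming the standard /sys/kernel/mm/ksm interface, 4 KiB pages, and that your caching processes have marked their memory mergeable with madvise(MADV_MERGEABLE) - KSM only scans advised regions:

```python
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")   # standard KSM sysfs directory
PAGE = 4096                        # assumes 4 KiB pages

def ksm_stat(name: str) -> int:
    """Read one KSM counter; returns 0 if KSM isn't present/enabled."""
    try:
        return int((KSM / name).read_text())
    except OSError:
        return 0

pages_shared = ksm_stat("pages_shared")    # distinct merged pages kept in memory
pages_sharing = ksm_stat("pages_sharing")  # extra mappings folded onto them

print(f"run flag      : {ksm_stat('run')}")
print(f"pages_shared  : {pages_shared}")
print(f"pages_sharing : {pages_sharing}")
print(f"approx. saved : {pages_sharing * PAGE / 1024**2:.1f} MiB")
```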
I would suggest that SSDs are not good for a swap partition because their performance degrades over time under a heavy write load. This has to do with the fact that SSD cells have a limited number of write cycles, so the controller plays all kinds of tricks (wear leveling) to minimize how often any single block is rewritten.
Let's see now:
100GB x 100K write cycles = 10,000TB which can be written to the SSD in total. 10,000TB / 100MB/s = 10^8 seconds until the SSD is worn out. 10^8 s ≈ 27,778 h ≈ 1,157 days ≈ 3.2 years. Let's say we have a super controller which eliminates almost all hot spots, and we still only get about 3 years...
3 years != 3000 years
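For what it's worth, here is the same endurance arithmetic spelled out, using the figures assumed in this thread (100GB drive, ~100K write cycles per cell, a steady 100MB/s of swap writes) rather than any particular drive's spec sheet:

```python
# Figures assumed in this thread, not specs for a particular drive.
DRIVE_BYTES  = 100e9     # 100 GB drive
WRITE_CYCLES = 100e3     # ~100K program/erase cycles per cell
WRITE_RATE   = 100e6     # sustained 100 MB/s of swap writes

total_writable = DRIVE_BYTES * WRITE_CYCLES   # 1e16 bytes = 10,000 TB
seconds = total_writable / WRITE_RATE         # 1e8 seconds
years = seconds / (365 * 24 * 3600)

print(f"{total_writable:.0e} bytes writable, ~{years:.1f} years at 100 MB/s")
# -> 1e+16 bytes writable, ~3.2 years at 100 MB/s
```

With perfect wear leveling and no write amplification that comes out to roughly three years of non-stop 100MB/s writing - far short of 3000 years, and any write amplification from imperfect wear leveling only shrinks it further.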