Up until recently I was using MongoDB to try and stem the tide of massive MySQL upgrades to my system servers. However, after reading this:
http://www.mikealrogers.com/2010/07/mongodb-performance-durability/
I have halted production of the Mongo version of my site until Mongo gains more stability.
Some have suggested I use the _changes feed within CouchDB to build a new Mongo object, which would solve the main issues within MongoDB at the moment, but I'm not sure about it.
I have looked into other DBs such as Redis and Cassandra, but I have since ditched Cassandra because you have to design around your queries, and that's just too restricting for my site (poor modularisation). I'm not really looking for joins; I just want the ability to search within a row rather than only by column-family IDs, since that restriction makes it hard to add new functionality. Cassandra would be great for a search engine or something, but not so good for a real website like Facebook in its entirety (as opposed to just its mail search).
I was wondering what experiences people on here have had trying to get a workable solution to the SQL speed issues. Is there a silver-bullet DB for it all, or is it just a case of gritting your teeth with MySQL or another SQL DB if you want reliability?
Maybe the answer is a cross between some form of caching and SQL (I noticed Facebook uses heavy caching on their wall pages etc., probably for this reason).
Thanks,
You haven't really given any idea of the scale of your site, which makes a huge difference to what technology/method is appropriate.
At any rate, nothing is going to be a magic bullet. Most of the things you mention are about making large-scale operation possible, not about avoiding scaling altogether.
Even using something like memcached in front of the DB requires a slightly different way of thinking to get the full benefit of it. One of the first things you should do (and there can be great savings if you haven't already) is look at and optimise the sorts of queries you are making.
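The "different way of thinking" with memcached is usually the cache-aside pattern: check the cache first, fall back to the DB on a miss, then populate the cache. Here's a minimal sketch in Python; `FakeCache` and `fetch_user_from_db` are stand-ins I made up for a real memcached client and your SQL layer.

```python
class FakeCache:
    """Dict-backed stand-in for a memcached client (swap in a real one)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, ttl=300):
        # A real memcached client would honour the TTL.
        self._store[key] = value


db_hits = 0  # counts how often we actually touch the database

def fetch_user_from_db(user_id):
    """Stand-in for an expensive SQL query."""
    global db_hits
    db_hits += 1
    return {"id": user_id, "name": "user%d" % user_id}


cache = FakeCache()

def get_user(user_id):
    """Cache-aside read: try cache, fall back to DB, then populate cache."""
    key = "user:%d" % user_id
    user = cache.get(key)
    if user is None:          # cache miss
        user = fetch_user_from_db(user_id)
        cache.set(key, user)
    return user


get_user(42)  # miss: goes to the DB
get_user(42)  # hit: served from cache, no second DB query
```

The catch is invalidation: whenever you write to the row, you must also delete or update the cached copy, which is where the changed way of thinking comes in.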
What is your issue with Mongo exactly? You will need to make backups anyway, and it now supports replica sets, so you are really not at the mercy of a single machine failure.
If you are running 25 servers now, you could easily run a replica set across a few of them and have no single point of failure.
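For concreteness, a three-member replica set is just a shared set name plus one initiation call; the hostnames below are placeholders for your own boxes.

```shell
# On each of the three machines, start mongod with the same set name:
mongod --replSet rs0 --dbpath /data/db --port 27017

# Then, from the mongo shell on any one member, initiate the set:
mongo --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db1.example.com:27017" },
    { _id: 1, host: "db2.example.com:27017" },
    { _id: 2, host: "db3.example.com:27017" }
  ]
})'
```

If the primary dies, the remaining members elect a new one automatically.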
10-30 million rows is really not a lot, especially since with a NoSQL solution you can probably consolidate several rows into a single record. If you have 2 million users, it is quite possible most of each user's data will fit into a single record (the 4 MB document limit is a lot of room).
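To illustrate the consolidation idea: rows that a relational schema spreads across a key/value table can be folded into one nested document per user, which is how you'd store them in Mongo. The field names here are purely illustrative.

```python
# Flattened SQL-style result: one row per (user, setting) pair.
rows = [
    {"user_id": 1, "key": "theme", "value": "dark"},
    {"user_id": 1, "key": "lang",  "value": "en"},
    {"user_id": 2, "key": "theme", "value": "light"},
]

def consolidate(rows):
    """Fold per-user rows into single nested records, one document per user."""
    docs = {}
    for row in rows:
        doc = docs.setdefault(row["user_id"],
                              {"_id": row["user_id"], "settings": {}})
        doc["settings"][row["key"]] = row["value"]
    return list(docs.values())

docs = consolidate(rows)
# Two documents instead of three rows; each user's data travels together,
# so a single fetch replaces what would be a join or multiple queries.
```

The same shape applies to things like a user's recent messages or friend list, as long as you stay under the document size limit.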