Suppose you're on dell.com right now and you're buying a server to run the MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for 2 processors? Invest more in RAM?
I've heard (correct me if I'm wrong) that MongoDB keeps as much as it can in RAM and then flushes everything to disk. In that case, should I invest in a CPU with a large L2 cache, probably >40GB of RAM and a solid state drive... right?
Would I be better off with one high-end server (~$11,309, 2 expensive processors, 96GB of RAM) or 2x (~$6,419, 2 expensive processors, 12GB of RAM) servers?
Is Dell OK or do you have better suggestions? (I'm outside the US, in Portugal.)
Initially, you'll want to beef up on RAM. How much RAM you'll need depends on the amount of data you're storing, the number of collections, the indexes on those collections, your data access patterns, etc. Lots of factors.
The most important thing is to have enough RAM to keep your indexes in RAM. Otherwise your performance will suffer dramatically, as your server(s) will page constantly while Mongo moves memory-mapped files in and out of RAM. That said, we haven't seen raw write speed affected, but everything else is: processing writes off the queue, flushing, dumps, etc. all take a dramatic hit once your indexes no longer fit in RAM.
So there is no real short answer. Basically, be smart about your indexes: only use what you need, and keep collections small if you can (i.e. break data out into multiple collections where it makes sense). Capped collections are also interesting to look into.
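To see whether your indexes actually fit in RAM, a minimal pymongo sketch like the one below can help. It assumes a local mongod and hypothetical names (`mydb`, `events`, `recent_events`), so swap in your own:

```python
# Rough check of index and data sizes vs. available RAM (pymongo sketch).
# Hypothetical names: database "mydb", collections "events" and "recent_events".
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

# Per-collection stats: totalIndexSize is the number to compare against your RAM.
stats = db.command("collStats", "events")
print("documents:       ", stats["count"])
print("data size (MB):  ", stats["size"] / 1024 / 1024)
print("index size (MB): ", stats["totalIndexSize"] / 1024 / 1024)

# Whole-database view.
db_stats = db.command("dbStats")
print("total index size (MB):", db_stats["indexSize"] / 1024 / 1024)

# A capped collection keeps itself at a fixed size (here ~100 MB), which is one
# way to keep a hot collection small.
if "recent_events" not in db.list_collection_names():
    db.create_collection("recent_events", capped=True, size=100 * 1024 * 1024)
```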
It is very important to use a 64-bit machine, not 32-bit. http://blog.mongodb.org/post/137788967/32-bit-limitations
With MongoDB what you want is RAM. And then some more RAM. Buying RAM can't hurt.
If you're at the stage of buying production hardware then the application you're running must already be written, right? So run the app on hardware you have and take metrics. Gradually change some components and take more metrics. When you're done, you'll know which points of focus are most important for your application and scenario.
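As a starting point for those metrics, a rough sketch like this samples MongoDB's `serverStatus` opcounters while your app runs and reports per-second rates (the 10-second window is an arbitrary choice):

```python
# Sample MongoDB opcounters twice and report rough ops/sec (pymongo sketch).
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

def opcounters():
    # serverStatus includes cumulative counts of inserts, queries, updates, deletes.
    return client.admin.command("serverStatus")["opcounters"]

before = opcounters()
time.sleep(10)  # arbitrary sampling window while your app runs its workload
after = opcounters()

for op in ("insert", "query", "update", "delete"):
    print(f"{op:>7}: {(after[op] - before[op]) / 10:.1f} ops/sec")
```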
First, buy as much RAM as you can. The second limiting factor is disk speed: RAID helps, SSDs help, more shards help. Measure throughput against disk efficiency and your required response times, then decide what to do within the budget you have.
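If you do go the sharding route rather than scaling one box, the setup commands are simple. A hedged sketch, assuming a mongos router on localhost and a hypothetical `mydb.events` collection with a `user_id` field:

```python
# Enable sharding for a database and collection (pymongo sketch).
# Must be run against a mongos router, not a standalone mongod.
from pymongo import MongoClient

mongos = MongoClient("mongodb://localhost:27017")

mongos.admin.command("enableSharding", "mydb")
mongos.admin.command("shardCollection", "mydb.events", key={"user_id": 1})
```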
I would wonder if a Linux clustered solution would be a better, cheaper alternative.
MongoDB lets you distribute data over many servers. That will be impossible with one honking server.
I thought MongoDB was one of the next steps taken after finding out that deploying a relational database on a honking server didn't scale well enough.
Tens of thousands of writes per minute is nothing. You can get 50,000 or more writes per second on decent hardware. Hardware specs really depend on what you are trying to do. In general, enough RAM for large databases and a fast I/O system are important, besides a decent CPU...
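If you want a rough feel for what your own hardware does with small documents, a quick-and-dirty benchmark like this is enough to see whether tens of thousands of writes per minute is even a concern (hypothetical `benchdb.small_docs` collection, batch sizes picked arbitrarily):

```python
# Quick-and-dirty insert throughput check with small documents (pymongo sketch).
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["benchdb"]["small_docs"]
coll.drop()  # start from an empty collection

BATCH, BATCHES = 1000, 50  # 50,000 small documents total; adjust to taste
start = time.perf_counter()
for i in range(BATCHES):
    coll.insert_many([{"n": i * BATCH + j, "payload": "x" * 64} for j in range(BATCH)])
elapsed = time.perf_counter() - start

print(f"inserted {BATCH * BATCHES} docs in {elapsed:.2f}s "
      f"({BATCH * BATCHES / elapsed:,.0f} writes/sec)")
```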
It is important to establish a solid baseline prior to designing your hardware. Generally, expect these kinds of questions to be asked by experienced MongoDB folks before anyone can even consider answering your question:
Current Application Stats (if any)
Data Ingestion Work Load
Query Patterns & Performance Expectations
Anticipated Access Patterns