I have an app running with:
- one instance of nginx as the frontend (serving static files)
- a cluster of Node.js applications for the backend (using the cluster and expressjs modules)
- one instance of Postgres as the DB
Is this architecture sufficient if the application needs to scale (this is only for HTTP / REST requests) to:
- 500 requests per second (each request only fetches data from the DB; the data could be several KB, and no big computation is needed after the fetch)
- 20,000 users connected at the same time
Where could the bottlenecks be?
One instance of nginx can handle thousands of small static files per second without breaking a sweat.
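For reference, a minimal sketch of that split, with nginx serving static files directly and proxying everything else to the Node.js cluster (the paths, ports, and upstream name here are hypothetical, assuming the workers listen on local ports):

```nginx
# Hypothetical sketch: static files served by nginx, API proxied to Node.js.
upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;

    # Static assets served straight from disk by nginx
    location /static/ {
        root /var/www/app;
        expires 1h;                 # let browsers cache assets
    }

    # Everything else goes to the app layer
    location / {
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
    }
}
```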
Scalability of the app layer depends on your app more than on Node.js. If it stores files, session data, etc. locally, things could get tricky; but if you put all state in a central place like the database (or maybe something like Redis for session data), then scaling the app layer by adding more nodes should be easy.
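A minimal sketch of that stateless app layer, using only Node's built-in cluster and http modules (expressjs would sit inside the worker branch); the port and response body are arbitrary choices for illustration:

```javascript
// Fork one HTTP worker per CPU core; all workers share the listening port.
// Handlers keep no per-worker state, so any worker (or any extra machine
// behind nginx) can serve any request.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // One worker per core; restart any worker that crashes.
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork());
} else {
  http.createServer((req, res) => {
    // Fetch from the central store (Postgres / Redis) here,
    // never from worker-local memory or disk.
    res.end(JSON.stringify({ pid: process.pid }));
  }).listen(3000);
}
```

Because nothing lives in worker memory, adding capacity is just adding more worker processes or more machines behind the nginx upstream.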
The database is almost always the hardest thing to scale. If you are doing mostly reads, Postgres 9.1 has some really nice hot standby features which allow you to have one read/write master database and several read-only slaves that can handle the bulk of the read work.
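A sketch of the Postgres 9.1 settings involved, assuming streaming replication to the standbys (hostnames and user are hypothetical):

```
# On the read/write master (postgresql.conf):
wal_level = hot_standby       # ship enough WAL for standbys to serve reads
max_wal_senders = 3           # one per standby

# On each read-only standby (postgresql.conf):
hot_standby = on              # allow read-only queries during recovery

# On each standby, recovery.conf points at the master:
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```

The app then sends writes to the master and spreads SELECT traffic across the standbys.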
Growing a write-heavy database system is probably the hardest scalability problem. When one super-beefy database server can't keep up, most people end up totally rethinking and rewriting their apps (unless that's been planned for from the start -- but planning for multiple master databases will make a lot of things harder and slower to build, and is very rarely needed; the Stack Overflow network runs on a single database, IIRC).