I have an Ubuntu server running Apache 2 which I expect to be hit by around 500-1000 concurrent users for a limited amount of time. The server serves a mixture of custom (rather light) PHP pages connected to a PostgreSQL database (around 20 MB in size) and static content. The hardware is stable and pretty beefy:
- Intel Xeon E5420 @ 2.5 GHz
- 12 GB RAM
During previous rushes on this server I have increased ServerLimit and MaxClients for the MPM module, and decreased Timeout and KeepAliveTimeout. It has worked, but the server felt sluggish and I have a feeling more can be done. How would you suggest configuring Apache to handle this kind of load?
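For reference, the prefork settings I have been tuning look roughly like this (the values here are illustrative, not what I would necessarily recommend):

```apache
# mpm_prefork tuning in apache2.conf -- illustrative values only
Timeout           30
KeepAlive         On
KeepAliveTimeout  2

<IfModule mpm_prefork_module>
    StartServers          20
    MinSpareServers       20
    MaxSpareServers       50
    ServerLimit           512
    MaxClients            512
    MaxRequestsPerChild   4000
</IfModule>
```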
Your bottleneck in this scenario will be PHP and what it's doing with the database. If you open a new connection on every request, you are more than likely going to hit I/O walls on disk access before anything else. Your best bet is to keep the database and static content on a RAM disk and commit changes to the on-disk database as needed. Also maintain an efficient query cache in PHP, using a fast RAM-backed hash lookup, so that you don't burden PostgreSQL with processing cycles for queries it has already answered. PostgreSQL's own query handling is likely more efficient than your code, but it's not as fast as never having to connect in the first place.
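A minimal sketch of that idea, assuming the APCu extension is available for shared-memory storage (function names, the cache key scheme, and the connection string are all illustrative):

```php
<?php
// Persistent connections plus a RAM-backed query cache (sketch).

function get_db()
{
    // pg_pconnect() reuses an existing connection within this Apache
    // child process, avoiding a new PostgreSQL backend per request.
    return pg_pconnect('host=localhost dbname=app user=web');
}

function cached_query($sql, $ttl = 60)
{
    $key  = 'q:' . md5($sql);       // hash lookup key in shared memory
    $rows = apcu_fetch($key, $hit);
    if ($hit) {
        return $rows;               // served from RAM, no DB round trip
    }
    $res  = pg_query(get_db(), $sql);
    $rows = pg_fetch_all($res) ?: array();
    apcu_store($key, $rows, $ttl);  // cache the result for $ttl seconds
    return $rows;
}
```

Note this only makes sense for read-mostly queries; anything touched by writes needs invalidation or a short TTL.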
Your server should be able to handle things easily. Just a few notes:
These are just generic guidelines.