I have a VPS with 768 MB RAM and a 1.13 GHz processor. I run a PHP/MySQL dating site; performance is excellent and server load is generally very low.
Sometimes I place ads on Facebook, and at peak times I can get 100-150 clicks within a few seconds. This causes the server to run out of memory:

    Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp ....
And all users receive an error 500 page.
I am just wondering if this sounds reasonable or not - to me, 100-150 clicks does not seem like a number that should cause Apache to run out of memory.
Any advice or recommendations on how to diagnose the issue would be highly appreciated.
Optimizing memory footprint is usually done by reducing (and limiting) these factors: the amount of memory each request (i.e. each child process) uses, and the number of requests being handled at once.
If the load is getting really heavy, you should also look into the speed of request handling (a faster response reduces the total size*time memory cost of one request).
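As a back-of-envelope check (the ~10 MB per Apache/suPHP child is an assumption, though another answer here arrives at the same figure):

    concurrent requests ≈ arrival rate × time per request
    100-150 requests × ~10 MB per child ≈ 1.0-1.5 GB
    vs. 768 MB total RAM, minus MySQL and the OS

So a burst of that size can plausibly exhaust memory even though your steady-state load is low.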
Maybe, if your site isn't CPU-intensive at all and you need an extreme request rate, try a different web server (like nginx or lighttpd) that behaves better in such situations.
Handling spiky traffic is tough. Alternatives include: a lighter-weight HTTP server (lighttpd, nginx, et al.); more physical RAM; a load balancer and additional hosts, which has the added benefit of higher availability; offloading your application code to a system separate from your HTTP server, typically by way of FastCGI; dynamically provisioning compute resources to meet load via a cloud service like EC2; or tonnes of other ideas I've forgotten or haven't thought of. There are some great resources on this stuff out there; the High Scalability blog, for example, covers a lot of this territory. Hope this helps!
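To illustrate the FastCGI offloading idea, a minimal nginx server block that hands PHP to a pool on a separate host might look like this (the 10.0.0.2 backend address and paths are placeholders, not anything from your setup):

    server {
        listen      80;
        server_name example.com;
        root        /var/www/html;

        # Static files served directly by nginx - cheap, flat memory footprint.
        location / {
            try_files $uri $uri/ /index.php;
        }

        # PHP executed by a FastCGI pool on another machine, so the web
        # server's memory stays small no matter how heavy the app code is.
        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass  10.0.0.2:9000;  # placeholder backend host:port
        }
    }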
The per-instance memory requirement for Apache is around 10 MB, though the exact amount varies with your configuration. Thus, if you wanted to serve 100 concurrent connections with Apache you'd need at least 1 GB of RAM, plus whatever is needed by the system, MySQL, and anything else you're running.
If you want to stop the out-of-memory errors you can lower the MaxClients Apache configuration parameter to an appropriate level. To get an estimate of the memory per Apache instance, look at top output and subtract the SHR column from the RES column for each httpd process (a shell version of this calculation is sketched below). Make sure to also subtract whatever memory MySQL and the rest of the system need. Note that you may end up with a relatively low number for MaxClients on this machine (30-50).
The other answers give a good summary of what you can do to improve how many concurrent requests you can handle. Be aware that on such a low-end system it may be difficult, though not necessarily impossible, to fit Apache/PHP/MySQL plus lighttpd/nginx/memcached/caching. How easy or hard it is will depend on your application and your target performance. Consider upgrading to a larger server...you'll find getting everything to fit with 2 GB or 4 GB much easier.
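Here's the promised sketch of that estimate, assuming Linux /proc with 4 kB pages (on Debian-style systems the processes are named apache2 rather than httpd):

    # Per-child private memory in MB: resident pages minus shared pages.
    # /proc/<pid>/statm fields: size resident shared text lib data dt (in pages)
    for pid in $(pgrep httpd); do
        awk '{ printf "%.1f MB\n", ($2 - $3) * 4096 / 1048576 }' "/proc/$pid/statm"
    done

Once you have a number, cap Apache accordingly in httpd.conf (40 here is just an example in the 30-50 range mentioned above):

    <IfModule prefork.c>
        ServerLimit          40
        MaxClients           40
        # Recycle children periodically so leaky PHP requests can't grow forever.
        MaxRequestsPerChild  1000
    </IfModule>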
The first thing to do is tidy up your current system. Out of the box, Apache is usually configured with lots of modules you probably don't need (particularly the auth and proxy modules; likewise, if you use SSL only rarely, consider removing mod_ssl and running stunnel instead). Do enable mod_deflate. Look at all the other stuff running on your system and shut down (and disable) any services you don't need.
Next, running suphp via CGI on a dedicated machine is usually a very dumb idea - use mod_php or FastCGI instead (a sketch of both changes follows).
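Roughly, in an Apache 2.2 style httpd.conf (module names and paths vary by distribution, so treat these lines as illustrative):

    # Comment out modules you don't use:
    #LoadModule proxy_module   modules/mod_proxy.so
    #LoadModule ssl_module     modules/mod_ssl.so

    # Enable compression:
    LoadModule deflate_module modules/mod_deflate.so
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript

    # Run PHP in-process instead of spawning a suPHP/CGI process per request:
    LoadModule php5_module modules/libphp5.so
    AddHandler php5-script .php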
By making your system go faster, not only do you provide better service to your customers, but you also reduce the memory footprint. So....
Install a PHP opcode cache if you don't already have one.
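For example, enabling APC (a common PHP opcode cache; this assumes the extension is already installed via PECL or your package manager) is a small php.ini change:

    ; php.ini - enable the APC opcode cache
    extension = apc.so
    apc.enabled = 1
    ; Shared memory for compiled scripts - size it to hold your whole codebase.
    apc.shm_size = 64M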
Start digging into the performance of your system - change your httpd config to start logging %D (the time taken to serve each request, in microseconds) and look at the product of each URL's frequency and its %D to identify which URLs are causing the most problems.
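One way to wire that up, assuming the exact log layout shown ($7 is then the URL and the last field is %D):

    # httpd.conf - common log format plus %D at the end
    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
    CustomLog logs/access_timed_log timed

A quick-and-dirty report of total microseconds per URL (frequency times duration, summed), worst first:

    awk '{ total[$7] += $NF } END { for (u in total) print total[u], u }' \
        logs/access_timed_log | sort -rn | head -20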
Lower the threshold on your slow query log in MySQL and run the results through a parser such as mysqldumpslow (note that, again, you should prioritize based on the product of frequency and run time).
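In my.cnf that's roughly the following (variable names are for MySQL 5.1+; older releases spell it log_slow_queries):

    # my.cnf, [mysqld] section
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/slow.log
    # Log anything slower than 1 second; lower this as you clear the worst offenders.
    long_query_time     = 1

mysqldumpslow ships with MySQL and can then summarize the log by time:

    mysqldumpslow -s t /var/log/mysql/slow.log | head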
Add a PHP auto-prepend file to enable gzip output buffer compression.
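A minimal sketch (the prepend path is a placeholder; skip this if mod_deflate already compresses your PHP output):

    ; php.ini
    auto_prepend_file = /var/www/prepend.php

    <?php
    // prepend.php - gzip-compress output for clients that accept it.
    // ob_gzhandler checks Accept-Encoding itself and falls back to plain output.
    ob_start('ob_gzhandler');

Setting zlib.output_compression = On in php.ini achieves much the same thing without a prepend file, if you prefer.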
Start recording the number of running httpd processes and compare it with the memory shown as available less cache/buffers by 'free'. Collate the data and graph it to work out how many httpd processes you can sensibly run, then change your httpd.conf to enforce that limit. Note that disk I/O is phenomenally slow, so you need a healthy amount of memory left available for caching.
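A crude minutely cron job is enough for the data collection (this assumes the classic 'free' output with a "-/+ buffers/cache" line, and httpd as the process name):

    #!/bin/sh
    # Append: timestamp, running httpd children, MB free once cache/buffers are reclaimed.
    printf '%s %s %s\n' \
        "$(date '+%F %T')" \
        "$(pgrep -c httpd)" \
        "$(free -m | awk '/buffers\/cache/ { print $4 }')" >> /var/log/httpd-mem.log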
Start looking at whether your server provides good caching information with its content, or whether clients and proxies have to keep coming back for stuff which hasn't changed (mod_expires, mod_headers).
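For example (the lifetimes here are arbitrary; choose ones that match how often your static assets really change):

    # httpd.conf - let clients and proxies cache static content
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png  "access plus 1 week"
        ExpiresByType image/jpeg "access plus 1 week"
        ExpiresByType text/css   "access plus 1 day"
    </IfModule>
    <IfModule mod_headers.c>
        Header append Cache-Control "public"
    </IfModule>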
But sometimes you do just need more hardware. I'd recommend considering a second server rather than just upgrading the one you've got - adding round-robin DNS is trivial - and you get the added benefit of better availability (once you've worked out how to handle the database replication).
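For reference, round-robin DNS really is just multiple A records for the same name (the 192.0.2.x addresses are documentation placeholders):

    ; zone file excerpt - clients get the two addresses in rotation
    www   IN  A   192.0.2.10
    www   IN  A   192.0.2.11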