I have a small home server with 256MB of memory. One of my sites just got linked on Reddit, and since then I've had two occurrences of the system using too much memory and crashing to the point where I have to manually reboot. At times my physical memory usage rises dangerously, to around 90%, and my MySQL and Apache services each consume about 100MB. I suspect that a rush of visitors arrives and the system runs out of memory before it can move anything to swap space (of which I have around 750MB). Should this problem even be happening? Is the only solution to get more memory? I've heard about a swappiness setting in Linux that can be tuned; maybe that could help.
Any suggestions? This problem is annoying.
Don't waste time with optimizing; just add a gigabyte of memory. 256MB is already the lowest end for running a web and MySQL server.
I'm pretty sure you need to manually reset because the system is using your swap space intensively. Remember that swapping is meant to move long-running processes that are idle out to disk. It is terribly bad when you need that memory again soon; this is called "swap thrashing". A web server should never be swapping. The database might tolerate swapping better because of its internal buffer handling.
So yes, there is no other way than spending about US$20 on more memory (or spending a lot of time optimizing the program).
Run free -m. Linux will use all available memory to cache things, so free will tell you whether there is actually a problem.
Example (the numbers below are illustrative for a 256MB machine, not taken from your system):
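                 total       used       free     shared    buffers     cached
    Mem:           249        169         80          0         27        110
    -/+ buffers/cache:         32        217
    Swap:          749          0        749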
Only 80MB free, but most of the used memory is actually the cache, which Linux sets up for caching file access from disk. If your Swap line shows memory used, then yes, you'll need more memory.
Yes, you need more memory to cope with situations like this.
You can also create a swapfile to help the machine limp along until you get more memory installed. This is not a cure, and running from swap is SLOW - but it should stop the machine from crashing due to lack of memory, which is worse.
E.g., to make and use a 2GB (2M * 1K) swap file, something like the following should work (/swapfile is just an example path):
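    dd if=/dev/zero of=/swapfile bs=1024 count=2M   # create a 2GB file of zeroes (/swapfile is an example path)
    mkswap /swapfile                                # set up the file as swap space
    swapon /swapfile                                # start using it immediately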
Also add the swapfile to /etc/fstab if you want it to persist across reboots. The entry would look roughly like this (again assuming /swapfile):
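    /swapfile  none  swap  sw  0  0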
Remember to swapoff the swapfile and delete it when you no longer need it, to recover the disk space used. With the same example path:
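    swapoff /swapfile   # stop using the file as swap
    rm /swapfile        # reclaim the disk space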
The short answer is: yes, you need more RAM.
I think your MaxClients directive has too high a value for your server to handle. Take a look at this document from the Apache documentation; it will be useful whether you have 256MB, 1GB, or 4GB of RAM.
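As a rough sketch (the numbers here are assumptions, not measurements from your box): with the prefork MPM, size MaxClients at roughly the RAM you can spare for Apache divided by the memory each child uses. If each child takes ~20MB and you can spare ~100MB alongside MySQL, that gives about 5:

    # httpd.conf sketch - sizes are illustrative for a 256MB box
    <IfModule prefork.c>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       4
        MaxClients            5    # ~100MB spare / ~20MB per child
        MaxRequestsPerChild 500    # recycle children to limit memory creep
    </IfModule>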
One option, if you are getting pounded by a Digg/Reddit rush, is to crawl your site two levels deep and create pure HTML copies of the first few pages. This will let the server hand out static HTML and not touch the DB for the majority of the users hitting the site.
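A minimal sketch of such a crawl with wget (www.example.com stands in for your site):

    # crawl two levels deep, rewriting links and grabbing images/CSS
    # so the copies can be served as plain static files
    wget -r -l 2 -k -p http://www.example.com/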
One option might be to install a caching proxy. Yes, it will use resources, but it will reduce the load on your DB and such (depending on your site, of course).
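For example, a bare-bones Squid reverse-proxy (accelerator) setup might look roughly like this; the hostname and port are assumptions, with Apache moved to 8080 so Squid can take port 80:

    # /etc/squid/squid.conf sketch
    http_port 80 accel defaultsite=www.example.com
    cache_peer 127.0.0.1 parent 8080 0 no-query originserver
    acl our_site dstdomain www.example.com
    http_access allow our_site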
Another solution that you might consider is the Coral Content Distribution Network (CoralCDN). You could choose to use it during peak loads or have your home server use it all the time. You can find example mod_rewrite rules online for redirecting your traffic.
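For reference, the commonly circulated rules looked roughly like this (www.example.com is a placeholder; the conditions keep Coral's own proxy and opted-out requests from being redirected):

    RewriteEngine On
    # don't redirect Coral's proxy, or requests explicitly opting out
    RewriteCond %{HTTP_USER_AGENT} !^CoralWebPrx
    RewriteCond %{QUERY_STRING} !(^|&)coral-no-serve$
    RewriteRule ^(.*)$ http://www.example.com.nyud.net/$1 [R,L]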