I am hosting a website for a client on a dedicated VPS with 512 MB of RAM. Within the last two weeks, it has hosed up twice due to 0% free memory (and no free swap). The site is pretty database-intensive, as it's running Drupal with 60+ modules. It gets, on average (according to the website owner), about 6,000 visitors/month (~200/day). It will probably grow fairly quickly, though; it was launched less than a year ago.
So I'm sitting down to finally get a grasp on Apache. I've hand-configured httpd.conf before, but without fully understanding all of its intricacies.
I've learned that I should multiply my average Apache process size in MB by MaxClients, and that the result shouldn't exceed the memory available to the system. According to top, each process size is a little under 7% (this would be about 1.4 MB, right?). 512/1.5 = 341... this seems awfully big to me. Am I misunderstanding something? At first, I thought I should calculate the process size from the percentage (in this case, ~7). Perhaps I was right the first time?
This gives a little room for other OS processes.
The database (MySQL) is on the same host.
My question is twofold. 1) I'm thinking about installing Varnish to help reduce the database load (almost all visitors are unauthenticated), to speed up initial response time, and so on. For a system with this little memory, am I insane? I'm thinking about giving it 256 MB if I do. Thoughts? Obviously, things wouldn't stay in the cache for long: 256/7 = ~36 pages. My hope, however, is that the "main" pages would be cached. The homepage, and some of the main pages behind it, are going to be pretty database-intensive, and I'd like to reduce the amount of disk I/O as much as possible.
2) If I did install Varnish, I'm wondering if I should tweak the Apache settings to half of what they're currently at, since I've given Varnish half the memory. What's the relationship between Varnish and Apache in this low-level configuration?
You need to get a larger VM. You start your post out telling us that your VM has "hosed up" twice now due to memory exhaustion and swapping. How would adding yet another application that benefits from a large amount of memory help your situation? It won't.
Let's break down your little VM and its memory usage.
First, you have 512 MB of RAM. Take 100 to 125 MB (20-25%) of this and remove it from your calculations entirely. This RAM is needed by your kernel, supporting processes, buffers, and cache. That leaves you with roughly 400 MB of RAM (splitting the difference).
MySQL
Let's say you want to use half of your 400 MB, giving MySQL 200 MB. Let me make it clear that I'm not familiar with Drupal's requirements, or whether it uses MyISAM or InnoDB. If you were configuring InnoDB, you'd use the innodb_buffer_pool_size variable and set it to 200M. You can expect MySQL to use more than this, however, for things like the query cache (if used), open tables, connection handling, thread tracking, sort buffers, join buffers, and countless other configuration options. If you're using MyISAM, it's even more complicated because there are a lot more variables involved; key_buffer and myisam_sort_buffer are just two of several. So, assuming InnoDB with a 200M innodb_buffer_pool_size and the query cache disabled, let's say MySQL consumes 216 MB of RAM.

Apache
You now have 184 MB of RAM left for Apache to use. First, let's take a moment to clear up some of the really confusing things in your question.
No. You observe your average httpd process size while your site is in use. Using the average size per httpd process (assuming the prefork MPM, the default), you calculate what MaxClients can be so that you don't exceed the memory allotted to httpd, or to the machine, causing it to swap.

Yes, you are misunderstanding something. First, stop using a percentage to "calculate" the size of the httpd processes.
Edit
Wait! What? 7% of 512 is 35.84. I'm not sure where you got 1.4 MB from. My answer still stands, and I won't be adjusting it to compensate for your 35 MB httpd processes.
End Edit
The size of the httpd processes is listed plainly in top, under the RES column. Technically, the memory used per process is the difference of RES and SHR (shared), since SHR is memory shared with other processes. On one of my machines (a box running Cacti with virtually no traffic; maybe 5-10 hits per day if I happen to look at it), this works out to roughly 9 MB on average. I am skeptical that Drupal utilizes so little memory, but you'll be able to tell easily. It definitely uses much more than 1.4 MB.

Now, let's take the very unrealistic assumption that your httpd processes will utilize an even 10 MB of RAM every time. With 184 MB of RAM "allocated" for Apache, that leaves you with a MaxClients of 18 (10 MB * 18 = 180 MB). Much, much less than 341.
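The arithmetic above can be sketched in shell. The commented ps line shows one way to measure the average resident size yourself; the process name httpd is an assumption (Debian and Ubuntu name the binary apache2), and the numbers are the estimates from this answer, not measurements from your server:

```shell
#!/bin/sh
# One way to measure average worker RSS yourself (needs a running httpd,
# so it's shown here as a comment only):
#   ps -o rss= -C httpd | awk '{sum+=$1; n++} END {printf "%.1f MB\n", sum/n/1024}'

# The memory budget from the answer:
TOTAL=512       # MB of RAM on the VPS
OS=112          # kernel, supporting processes, buffers, cache
MYSQL=216       # 200 MB InnoDB buffer pool plus overhead
PER_WORKER=10   # assumed average RES-SHR per httpd worker, in MB

BUDGET=$(( TOTAL - OS - MYSQL ))
echo "RAM left for Apache: ${BUDGET} MB"       # 184 MB
echo "MaxClients: $(( BUDGET / PER_WORKER ))"  # 18
```

Rerun the division with your own measured per-worker figure; the resulting MaxClients will move around quite a bit as that number changes.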
Varnish
First, let's evaluate the current state of your server. Assuming you properly configured MySQL and httpd not to swap under load, you're running with what is a pretty anemic MySQL configuration and an httpd configuration that will start to refuse requests if you ever get more than 18 concurrent requests. By any standard, this machine is in no shape to handle traffic that will "grow fairly quickly".
Now you want to add a third application and allocate 256 MB of RAM to it?! That RAM will have to come from either MySQL or Apache, and maybe you could get away with stealing some from the OS itself. Either way, you're further gimping one of the core services on your machine.
It's technically possible that you could find the sweet spot of configuration settings for Varnish, Apache, and MySQL on the same host that allowed all to operate at ideal efficiency with just the right amount of RAM, but I'm skeptical.
The Solution
Use what I've taught you about configuring MySQL and Apache correctly to do exactly that: configure them correctly. Your MaxClients should be nowhere near 300; very likely under 20, and quite possibly under 10.

Another thing I haven't mentioned is that httpd processes can be a little reluctant to relinquish RAM when they've "peaked" much higher than the average. E.g., if an httpd worker hits 20 MB for a single request, that worker will continue to use 20 MB indefinitely (AFAIK) until it is reaped. You can address this by lowering your MaxRequestsPerChild setting. Lowering it means child processes are reaped more frequently. This will slow down your performance under load (forking new processes is relatively expensive), but it will help keep your memory usage manageable.

Configured properly, your server should never swap. If you configure your server properly and still see issues such as refused connections under load, then I would suggest either expanding your VM or looking into adding Varnish on a separate, dedicated VM.
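To make that concrete, here is a sketch of what those settings might look like. It assumes Apache 2.2's prefork MPM (in 2.4, MaxClients was renamed MaxRequestWorkers) and MySQL with InnoDB; every value below is one of the assumptions worked out above, not a measurement from your server, so treat them as starting points only.

```apache
# httpd.conf sketch -- prefork MPM, assuming ~10 MB average per worker
<IfModule prefork.c>
    StartServers           4
    MinSpareServers        2
    MaxSpareServers        4
    ServerLimit           18
    MaxClients            18    # 184 MB budget / ~10 MB per worker
    MaxRequestsPerChild  500    # reap workers often to cap peaked RSS
</IfModule>
```

And the MySQL side of the same budget:

```ini
# my.cnf sketch -- the 200 MB InnoDB buffer pool discussed above
[mysqld]
innodb_buffer_pool_size = 200M
query_cache_size        = 0     # query cache disabled, per the estimate
max_connections         = 30    # a little headroom above MaxClients
```

Measure the real per-worker RES minus SHR under load, recompute MaxClients from it, and adjust from there.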
You're off to the right start by reading the docs and seeking help online. If you get stuck or need in-depth help, please feel free to ask another question, but don't forget to search first! It's quite possible you'll find your answer in an existing one.