If I end up having 4 or 5 medium sites on one server, I want to be sure that each one that requires memcached has at least its own allotted amount of memory. Is there a simple way to do this? The only way that comes to mind would be to run separate processes on different ports for each one. Is there an easier/other way? I just don't want one site hogging all of the RAM available to memcached.
I have tons of RAM, and say I want to give one of my Magento sites exactly 512 MB for memcached, and another custom application exactly 512 MB as well. Ideas?
Memcached has no conception of namespaces, partitions, or similar. Therefore the only way would be to run multiple instances of memcached. That's no problem though as memcached is ridiculously simple to set up (purposefully).
It can just be bound to, for example, 5 different ports (one for each site) or 5 different IP addresses.
See here for an example: http://blog.nevalon.de/en/wie-kann-ich-mehrere-instanzen-von-memcached-auf-einem-server-laufen-lassenhow-can-i-run-multiple-instances-of-memcached-on-one-server-20090729
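For example, a minimal sketch (the ports, memory sizes, and user below are placeholders, adjust to taste):

    # One memcached instance per site, each capped at 512 MB on its own port.
    memcached -d -m 512 -p 11211 -u memcached   # site 1
    memcached -d -m 512 -p 11212 -u memcached   # site 2
    memcached -d -m 512 -p 11213 -u memcached   # custom app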
I agree with Niall here. Another possibility is to use private IP space. Say your server can be assigned four IPs, 10.x.x.1 through 10.x.x.4. You can launch four memcached instances and bind one to each IP, giving every site the same port but a different memcached IP.
On top of that, you can modify the memcached init script to start and stop all 4 instances together in one go. This works with either the IP or the port binding method and will greatly simplify things for you.
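As a rough sketch, the init script would essentially end up running something like this (the IPs, size, and user are assumptions):

    # One instance per private IP, all on the default port, each capped at 512 MB.
    for ip in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
        memcached -d -m 512 -l "$ip" -p 11211 -u memcached
    done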
Here is an example of starting multiple servers in one go: Multiple Memcached server /etc/init.d startup script that works? (see the script source in that question).
There is a reason memcached requires separate processes; it has more to do with memory management than with memcached itself. Separate processes sharing memory does not seem like a good idea, and memory management is best left to the system.
This is not necessary at all. If you consider that memcached storage actually works as an LRU stack, it becomes obvious that dedicating a fixed portion of memory to each site is suboptimal. A site that should be cached more heavily ends up with a smaller share of memory, so its records are pushed out more often than needed, while a site receiving less traffic sits on mostly unused data in its dedicated portion. That memory could have been put to better use by the more active sites, which will instead have to reach back to the SQL backend when their memcached records are missing.
I agree with both responses here but wanted to add some more input.
I do not think there is a way to split out namespaced objects and their associated RAM usage within a single memcached instance, so, as the other responses say, it is best to run multiple instances.
While running a few instances might be an easy task, if this is large scale these might also be good resources to look at:
twemproxy
https://github.com/twitter/twemproxy
Allows you to set up a proxy in front of memcached. This means all sites/clients connect to nutcracker processes, which load balance across your memcached pools.
moxi
http://code.google.com/p/moxi/
Another proxy solution for load balancing memcached.
So again it depends on the size of your infrastructure, but these tools could be helpful in a larger or growing environment. Splitting things out this way lets you run several smaller instances: rather than adding, say, a new 512 MB instance for every site, you could stick to even smaller 64 MB instances and expand in much smaller increments.
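For illustration only, a minimal nutcracker (twemproxy) setup could look something like this; the pool name, ports, and backend list are assumptions, so check the twemproxy README for the full set of options:

    # Write a minimal pool definition and start the proxy in front of two
    # local memcached instances; clients then talk only to 127.0.0.1:22121.
    cat > /etc/nutcracker/nutcracker.yml <<'EOF'
    sites:
      listen: 127.0.0.1:22121
      hash: fnv1a_64
      distribution: ketama
      auto_eject_hosts: true
      server_retry_timeout: 2000
      server_failure_limit: 3
      servers:
       - 127.0.0.1:11211:1
       - 127.0.0.1:11212:1
    EOF
    nutcracker -d -c /etc/nutcracker/nutcracker.yml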
It's extremely unlikely that memcached will ever consume more than around 32 MB of RAM for a Magento store anyway. When you consider that each cached page is around 4 KB, 32 MB gives you room for roughly 8,000 pages, so you've got a fair bit of scope for cached content.
I would suggest setting up multiple memcached instances using Unix sockets (it's faster and safer than TCP/IP). You can start memcached with flags along the lines shown below; see http://www.sonassihosting.com/blog/support/implement-memcache-for-sonassi-magento-optimised-dedicated-servers/ for the full write-up.
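A rough sketch of the sort of flags involved (the socket paths, size, and user here are placeholders rather than the exact values from that article):

    # One instance per site, each on its own Unix socket, capped at 512 MB.
    memcached -d -m 512 -u memcached -s /var/run/memcached-site1.sock -a 0644
    memcached -d -m 512 -u memcached -s /var/run/memcached-site2.sock -a 0644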
Your memcached local.xml config would then look something like the sketch below; read this to see why the slow_backend is necessary: http://www.sonassi.com/knowledge-base/magento-kb/what-is-memcache-actually-caching-in-magento/
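As a sketch only (not the exact config from that article; the socket path is a placeholder), the relevant cache block in app/etc/local.xml would be along these lines:

    <config>
      <global>
        <cache>
          <backend>memcached</backend>
          <slow_backend>database</slow_backend>
          <memcached>
            <servers>
              <server>
                <!-- a unix:// host with port 0 tells the client to use the socket -->
                <host><![CDATA[unix:///var/run/memcached-site1.sock]]></host>
                <port><![CDATA[0]]></port>
                <persistent><![CDATA[1]]></persistent>
              </server>
            </servers>
          </memcached>
        </cache>
      </global>
    </config>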
The easiest way will be, as you suspected, to have multiple instances of memcached. Memcached is purposefully kept as simple as possible for speed, so it offers no internal form of separation like the one you're looking for. It doesn't even offer any form of authentication, for the same reason!