We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only three lines in the php.ini file (session.save_handler and session.save_path):
I replaced:
session.save_handler = files
with:
session.save_handler = memcache
Then on the master webserver I set the session.save_path
to point to localhost:
session.save_path="tcp://localhost:11211"
and on the slave webserver I set the session.save_path
to point to the master:
session.save_path="tcp://192.168.0.1:11211"
Job done, I tested it and it works. But...
Obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their session will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so each machine looks in its own session storage before going over the network. For example:
Master:
session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
Slave:
session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"
Would this successfully share sessions across the servers AND help performance? i.e. save network traffic 50% of the time. Or is this technique only for failover (e.g. when one memcache daemon is unreachable)?
Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcached daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol...
Assume: no sticky-sessions, round-robin load balancing, LAMP servers.
Disclaimer: You'd be mad to listen to me without doing a tonne of testing AND getting a 2nd opinion from someone qualified - I'm new to this game.
The efficiency improvement proposed in this question won't work. My main mistake was to think that the order in which the memcached stores are defined in the pool dictates some kind of priority. This is not the case. When you define a pool of memcached daemons (e.g. using
session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
) you can't know which store will be used. Data is distributed evenly, meaning an item might be stored in the first store, or it could be in the last (or in both, if the memcache client is configured to replicate - note that it is the client that handles replication; the memcached server does not do it itself). Either way, putting localhost first in the pool won't improve performance: there is a 50% chance of hitting either store.

Having done a little testing and research, I have concluded that you CAN share sessions across servers using memcache BUT you probably don't want to - it doesn't seem to be popular because it doesn't scale as well as using a shared database and it is not as robust. I'd appreciate feedback on this so I can learn more...
Tip 1: If you want to share sessions across 2 servers using memcache:
Ensure you answered Yes to "Enable memcache session handler support?" when you installed the PHP memcache client, then add the following to your /etc/php.d/memcache.ini file:

On webserver 1 (IP: 192.168.0.1):
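(The ini lines themselves appear to have been lost in formatting. A hedged reconstruction for webserver 1, mirroring the single-shared-store setup from the question - the IP and port are the question's own:)

```ini
; /etc/php.d/memcache.ini on webserver 1 (192.168.0.1)
session.save_handler = memcache
; both webservers point at the same memcached daemon
; (here: the one running on webserver 1)
session.save_path = "tcp://192.168.0.1:11211"
```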
On webserver 2 (IP: 192.168.0.2):
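(Again a hedged reconstruction of the missing block - webserver 2 points at webserver 1's memcached so both machines share one session store:)

```ini
; /etc/php.d/memcache.ini on webserver 2 (192.168.0.2)
session.save_handler = memcache
; same single store as webserver 1, reached over the network
session.save_path = "tcp://192.168.0.1:11211"
```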
Tip 2: If you want to share sessions across 2 servers using memcache AND have failover support:
Add the following to your /etc/php.d/memcache.ini file:

On webserver 1 (IP: 192.168.0.1):
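(The original ini block is missing here; a plausible sketch, assuming a two-daemon pool with the pecl memcache client's failover options enabled:)

```ini
; /etc/php.d/memcache.ini on webserver 1 (192.168.0.1)
memcache.hash_strategy = consistent
memcache.allow_failover = 1
session.save_handler = memcache
; the full pool, in the same order on every webserver
session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
```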
On webserver 2 (IP: 192.168.0.2):
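(Hedged sketch for webserver 2 - the point is that the configuration is identical to webserver 1's, including the order of the pool:)

```ini
; /etc/php.d/memcache.ini on webserver 2 (192.168.0.2)
memcache.hash_strategy = consistent
memcache.allow_failover = 1
session.save_handler = memcache
; same pool, same order as webserver 1
session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
```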
Notes: Use the same session.save_path, with the servers in the same order, on all servers.

Tip 3: If you want to share sessions using memcache AND have transparent failover support:
Same as tip 2, except you need to add the following to your /etc/php.d/memcache.ini file:

Notes: Failed gets are retried on the mirrors, so users do not lose their session in the case of a single memcache daemon failure. memcache.session_redundancy is for session redundancy, but there is also a memcache.redundancy ini option that can be used by your PHP application code if you want it to have a different level of redundancy.

Re: Tip 3 above (for anyone else who happens to come across this via Google), it seems that, at least presently, in order for this to work you must use
memcache.session_redundancy = N+1
for N servers in your pool; at least, that seems to be the minimum threshold value that works. (Tested with PHP 5.3.3 on Debian stable, pecl memcache 3.0.6, two memcached servers. session_redundancy=2 would fail as soon as I turned off the first server in the save_path; session_redundancy=3 works fine.) This seems to be captured in these bug reports:
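(Since the ini block for Tip 3 was lost in formatting above, here is a hedged reconstruction pulling together Tip 3's notes and this reply, using the N+1 redundancy value that was found necessary for a two-server pool:)

```ini
; /etc/php.d/memcache.ini on both webservers (pecl memcache 3.x)
memcache.hash_strategy = consistent
memcache.allow_failover = 1
; write each session to extra mirrors; N+1 (= 3 for a 2-server pool)
; was needed in practice, per the test described above
memcache.session_redundancy = 3
session.save_handler = memcache
session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
```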
Along with the php.ini settings shown above ensure the following are set too:
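(The settings list did not survive formatting; the pecl memcache failover options it most plausibly refers to are these - a hedged guess at the elided list:)

```ini
; client-side failover options for the pecl memcache extension
memcache.allow_failover = 1
memcache.hash_strategy = consistent
```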
Then you'll get full failover and client-side redundancy. The caveat with this approach is that if memcached is down on localhost, there will always be a read miss before the php memcache client tries the next server in the pool specified in session.save_path.
Just bear in mind that this affects the global settings for the php memcache client running on your web server.
memcached doesn't work that way (please correct me if I'm wrong!)
If you want your application to have redundant session storage, you have to create something that alters/adds/deletes entries in both memcached instances. memcached doesn't handle this; the only thing it provides is key hash storage. So no replication, no synchronization, nothing, nada.
I hope I am not wrong on this matter, but this is what I know of memcached; it's been a few years since I touched it.
memcached doesn't replicate out of the box, but repcached (a patched memcached) does. However, if you're already using MySQL, why not just use its master-master replication functionality and get the benefit of full data replication?
C.