We are trying to design an architecture that can handle more than 64k concurrent websocket connections.
We first tried Amazon ELB, but its design can't cope with unexpected traffic spikes or with websockets (in TCP mode it times out websocket connections unexpectedly).
With HAProxy those limits do not apply, but we'll be limited to ~64k websockets maintained between HAProxy and the back-end servers, since a single source IP only offers ~64k source ports per destination.
Multiple solutions came to mind:
- Multiple HAProxy instances, load-balanced with DNS (Route 53 has a weighted routing option)
- Two HAProxy instances with Keepalived and multiple internal IP addresses (not sure whether that is doable)
Is there a better way to do this?
If your 64k limit is due to source ports, you can do something like the following (a little hacky, but it's what we currently do at SE for websockets; we usually have something like 0.5 million concurrent connections through HAProxy):
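The original snippet isn't shown here, but a minimal sketch of the source-port trick looks like this: the ~64k ceiling applies per (source IP, destination IP:port) tuple, so listing the same back-end several times, each `server` line with a different `source` IP bound on the load balancer, multiplies the available ports (server names and addresses below are hypothetical):

```
backend websockets
    # Same physical back-end listed twice, once per source IP.
    # Each (source IP, dest IP:port) pair gets its own ~64k
    # ephemeral-port pool, roughly doubling the connection ceiling.
    server ws01-a 10.0.0.10:8080 check source 10.0.0.2
    server ws01-b 10.0.0.10:8080 check source 10.0.0.3
```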
Also, multiple instances are doable with Keepalived: just do something like round-robin DNS over multiple IPs. Just make sure the IPs always get picked up by active load balancers, since DNS itself won't route around a dead one (there are more options here as well; this one is just simple).
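A minimal sketch of that setup, with hypothetical interfaces and VIPs: each node is MASTER for one virtual IP and BACKUP for the other, and both VIPs sit in the round-robin DNS record, so a node failure floats its VIP over to the surviving HAProxy instead of leaving a dead DNS entry:

```
# keepalived.conf on lb01; mirror it on lb02 with the
# MASTER/BACKUP states and priorities swapped.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    virtual_ipaddress {
        10.0.0.100    # VIP #1, in the DNS round robin
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        10.0.0.101    # VIP #2, in the DNS round robin
    }
}
```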
You could set up multiple HAProxy systems that share the same IPs using Anycast and BGP (or some other border routing protocol). That way all HAProxy systems are active; if one of them goes down, it stops advertising its BGP route and within ~30 seconds stops receiving traffic, which gets redistributed to the remaining systems advertising the same range.
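As a rough illustration of the advertisement side, here is what that could look like with BIRD (all addresses and AS numbers below are hypothetical; the anycast address must also be configured on a loopback/dummy interface so HAProxy can bind to it):

```
# /etc/bird.conf on each HAProxy node
protocol static anycast {
    # Shared anycast service address, announced while BIRD runs.
    route 203.0.113.10/32 reject;
}

protocol bgp uplink {
    local as 65001;
    neighbor 10.0.0.1 as 65000;   # upstream router
    import none;
    # Only export the anycast route; stopping BIRD (or removing
    # the static route) withdraws the advertisement, and traffic
    # shifts to the other nodes announcing the same prefix.
    export where proto = "anycast";
}
```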
For instance, check this URL for how to set up such a layout.