I am currently using HAProxy to load balance TCP connections from clients to my Erlang app server. The connections are persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to scale horizontally based on the number of TCP connections. What worries me, though, is that I'll need as many HAProxy servers as app servers, since the connections are 1:1. Is there a way to "proxy" the TCP connection to the app server so that once HAProxy hands the client off to my Erlang server, it can free up the connection and serve another client? Are there any papers or existing solutions I can read so that I only have to worry about the 64K limit on my app servers, and not on the load balancing servers themselves?
What makes you think you are limited to 64K clients? You should be able to serve more than that. It's not the port count that is the limiting factor; it's the memory and CPU power that limit how many connections you can have open at any given time. Check http://www.kegel.com/c10k.html (it's dated, so just think of it as a c100k or c1M problem instead). :-)
By the way the haproxy site has an excellent article on the subject of load balancing and haproxy's architecture: http://haproxy.1wt.eu/download/1.2/doc/architecture.txt
Regarding the connection limit: it is a theoretical limit that you normally wouldn't reach, as you'd run out of resources before hitting it.
Quoting http://www.quora.com/TCP/What-is-the-maximum-number-of-simultaneous-TCP-connections-achieved-to-one-IP-address-and-port
"The TCP standard sets up unique connection identifiers as the tuple of local IP address, local TCP port number, remote IP address, and remote TCP port number. In your example, the local numbers are both fixed, which leaves approximately 2^32 remote IP (version 4) addresses, and 2^16 TCP port numbers, or an approximate total potential simultaneous TCP connections of 281,474,976,710,656 (2^48, or 2.81 * 10^14, or 281 trillion)."
Introduction
64k concurrent IDLE connections is peanuts for HAProxy and Erlang.
The first thing to do is enable the statistics page on HAProxy. It is a MUST have for monitoring and performance tuning.
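A minimal sketch of enabling it, assuming port 8404 and the /stats URI (both are arbitrary choices, not values from the question):

    listen stats
        bind *:8404
        mode http
        stats enable
        stats uri /stats
        stats refresh 10s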
Then let's get into limits.
The OS Connection Limit
There can only be 1 connection per tuple client_IP:client_PORT:server_IP:server_PORT. This comes from the way connections are stored and retrieved in the kernel (i.e. a hashtable), and it is the same on Linux and Windows. I have to disagree with aseq about that: it is NOT a theoretical limit at all. It is a very practical limit, likely reached by anyone doing moderate load testing.
Let's suppose there are 3 computers in your current setup: the client (or load tester), the HAProxy load balancer, and the Erlang app server.
All the IPs are fixed and the webserver port is fixed. That leaves only one port as a variable parameter, so the maximum number of connections is limited by the number of ports available on a single computer. There is little headroom here (see Ephemeral Ports Range below). You have to add more instances, both Erlang instances AND load testing instances.
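To put a number on that headroom (a rough back-of-the-envelope estimate, assuming the default Linux ephemeral port range described below):

    61000 - 32768 + 1 = 28233 usable source ports
    => at most ~28k concurrent connections from one client IP to one server IP:port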
Note: Users naturally come from lots of different IPs, whereas load testers (curl, Apache ab, JMeter) usually run on a single box with a single IP (JMeter and similar tools can scale out using distributed slaves).
Note: HAProxy connections are always in pairs (one to the client + one to the internal server). Bear that in mind because most system limits must be 2*N to allow for N users.
Ephemeral Ports Range
Only a limited range of ports is used for creating new connections. They are called ephemeral ports. The Linux default range is 32768 to 61000. Extend the range, but first check whether any running services on your servers are using ports in the new range.
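A quick sketch of how to check and extend it with sysctl (the 20000-65535 range below is only an example, roughly 45k usable ports; pick bounds that don't collide with anything listening on your servers, and persist the change in /etc/sysctl.conf):

    # Show the current ephemeral port range
    sysctl net.ipv4.ip_local_port_range
    # Extend it (example values)
    sysctl -w net.ipv4.ip_local_port_range="20000 65535"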
That tweak can only give 60% more ports. It won't be enough to go web scale with a single server.
Short Lived Port
Be aware that a port cannot be reused for a full minute after its connection is closed (see the TCP TIME_WAIT state), which can make the effective port pool quite small when connections are opened and closed at a high rate (10k ports/s, anyone?). There are kernel settings to shorten the closing duration and to allow reusing ports that are still being closed.
You won't need these tweaks for persistent connections, as long as they live long enough (at least a couple of minutes before being renewed). It's important to be aware of the potential issue nonetheless.
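For reference, these are the knobs usually mentioned in that context on Linux (a sketch only; the exact effect depends on kernel version, so measure before and after):

    # Allow reusing sockets stuck in TIME_WAIT for new outgoing connections
    sysctl -w net.ipv4.tcp_tw_reuse=1
    # Shorten how long orphaned connections linger in FIN-WAIT-2 (default 60s)
    sysctl -w net.ipv4.tcp_fin_timeout=30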
HAProxy maxconn
Configure the maxconn setting in HAProxy. It is the maximum number of open connections allowed at any time. It can be configured in the global section, per frontend, or per backend. The statistics page shows the active setting for each of them.
Linux ulimit
The ulimit is the maximum number of files a single process may have open (sockets are files on Linux). The Linux default is somewhere between 1k and 10k.
HAProxy automatically configures its process ulimit based on the maxconn parameter. You will probably need to tweak the ulimit manually for the Erlang process.
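To make that concrete, here is a rough sketch of how the pieces fit together; the numbers, names, and addresses are made up for illustration, not recommendations:

    # haproxy.cfg (sketch)
    global
        maxconn 50000          # HAProxy raises its own ulimit to cover ~2x this (client + server side sockets)

    frontend erlang_in
        bind *:8080
        mode tcp
        maxconn 50000
        default_backend erlang_nodes

    backend erlang_nodes
        mode tcp
        server app1 10.0.0.10:8080 check maxconn 25000
        server app2 10.0.0.11:8080 check maxconn 25000

For the Erlang node itself, raise the file descriptor limit in whatever starts it, e.g. ulimit -n 65536 in the startup script or an entry in /etc/security/limits.conf.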
I think the best way to answer your question is to point out that you shouldn't need a 1:1 mapping between HAProxy and your app servers. A persistent connection is possible with HAProxy through several methods. I would suggest searching the documentation for "persistent" to learn more: http://haproxy.1wt.eu/download/1.4/doc/configuration.txt.
For example, with just TCP connections, adding balance source to your config should provide persistence for you.
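For instance, a minimal TCP-mode listener along these lines (the names, addresses, and ports are made up for illustration):

    listen erlang_cluster
        bind *:9000
        mode tcp
        balance source
        server app1 10.0.0.10:9000 check
        server app2 10.0.0.11:9000 check

balance source hashes the client's source IP, so a given client keeps landing on the same backend server, which gives you stickiness for plain TCP traffic without cookies.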
64k per host is a definite hard limit, but the app server handling it typically runs out of memory before that. Java app servers, for example, typically handle around 2000 concurrent connections before the 32-bit VM runs out of heap.