We have a large web project (about 2-3k requests per second) using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer in front of the java application servers. The frontend (haproxy) currently runs on Linux, but we are going to migrate it to Solaris 10, since all our other servers run Solaris.
After switching the traffic over I see two things: a) the web site loads slower (5-10 seconds with images, compared to 2-3 seconds on Linux); b) haproxy sometimes fails its "lifecheck" (fetching a special web page and checking the HTTP response code) due to a socket timeout. After switching the traffic back to Linux everything is fine.
I've tried tuning every parameter I found in /dev/tcp, but with no progress. I believe the problem lies in some limitation on open sockets. I would greatly appreciate it if someone could point me to the answer.
P.S. haproxy runs in a Xen DomU on Linux (kernel 2.6.18, Debian 5) and in a zone on Solaris (10 u8). The only tuning we did on Linux was increasing ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).
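For reference, a rough sketch of the commands involved (the values are illustrative, not the exact ones we used):

    # Linux: raise the conntrack table size (the exact sysctl path
    # depends on whether ip_conntrack is built in or a module)
    sysctl -w net.ipv4.netfilter.ip_conntrack_max=131072

    # Solaris 10: list the TCP tunables in /dev/tcp, then raise
    # the listen backlog
    ndd /dev/tcp \?
    ndd -set /dev/tcp tcp_conn_req_max_q 1024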
On Solaris you normally have to extend the source port range (which covers only about 16k ports by default) and reduce the TIME_WAIT interval, which is set to 240 seconds by default; otherwise you quickly end up with no free port for establishing outgoing connections. If memory serves, the parameters live in /dev/tcp: the timeout is tcp_time_wait_interval, and the port range is controlled by tcp_smallest_anon_port and tcp_largest_anon_port.
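A minimal sketch with ndd, assuming those parameter names are right for Solaris 10 (the values below are only examples; check your current settings first):

    # Read the current values
    ndd /dev/tcp tcp_time_wait_interval
    ndd /dev/tcp tcp_smallest_anon_port
    ndd /dev/tcp tcp_largest_anon_port

    # Shorten TIME_WAIT (value is in milliseconds) and widen the
    # ephemeral port range used for outgoing connections
    ndd -set /dev/tcp tcp_time_wait_interval 60000
    ndd -set /dev/tcp tcp_smallest_anon_port 10000
    ndd -set /dev/tcp tcp_largest_anon_port 65535

Keep in mind that ndd settings don't survive a reboot, so once you find values that work you'll want to reapply them from a boot script.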
You may also need to increase the maximum number of file descriptors to support a larger number of concurrent sockets, especially if your servers behind haproxy take a long time to respond. I remember something like rlim_fd_cur and rlim_fd_max in /etc/system, which are not set by default. I recall you had to reboot after changing these; I don't know if that is still the case with Solaris 10. Unfortunately, it's been years since I last did any tweaking there :-/
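As a sketch, assuming those names are indeed rlim_fd_cur and rlim_fd_max, the /etc/system entries would look like this (values are illustrative):

    * Per-process file descriptor limits (soft and hard);
    * changes to /etc/system require a reboot to take effect
    set rlim_fd_cur=8192
    set rlim_fd_max=65536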
Hope this helps!