I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I load tested the site, and if I fire 100 threads at it, each making requests with JMeter, I start getting errors in the load test tool like 'no route to host', even though the load test tool and the server are on the same machine.
If I run the following command, where port 82 is the port my Squid server is running on:
netstat -ann | grep 82 | wc -l
I get around 22,000, and most of them are in TIME_WAIT. I'm thinking that the huge number of sockets in the TIME_WAIT state may be starving the box of resources.
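A rough way to count only the TIME_WAIT sockets, rather than everything matching 82 (just a sanity check, not a precise measurement):
netstat -ant | awk '$6 == "TIME_WAIT"' | wc -l
# or, with ss (the count includes ss's header line):
ss -tan state time-wait | wc -l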
A handy answer from StackOverflow:
https://stackoverflow.com/questions/760819/is-there-a-limit-on-number-of-tcp-ip-connections-between-machines-on-linux
There are a variety of limits that can affect the operation of your web proxy. As sysadmin1138 mentioned, the number of TCP connections is one of them.
Another, as Kyle managed to post before me, is file descriptors. Squid's default, at least with 2.6, is 1024. To increase this limit, you have to recompile with a higher --with-maxfd. Even after recompiling with a higher FD limit, the ulimit for the user starting Squid is still in effect. For example, to raise that limit to 8192, run this before starting Squid:
ulimit -HSn 8192
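For example, a minimal start wrapper along these lines keeps the raised limit in the same shell that launches Squid (the binary and config paths are assumptions; adjust for your install):
#!/bin/sh
# raise the hard and soft open-file limit for this shell, then start Squid
ulimit -HSn 8192
exec /usr/sbin/squid -f /etc/squid/squid.conf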
Linux kernel defaults are pretty high these days, so you likely won't have to tune anything outside of Squid for file descriptors. If you provide log output, chances are it will indicate the exact issue, and we can give more detailed recommendations.
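To check which open-file limit a running Squid process actually ended up with (assuming a Linux /proc and a single squid process), something like this works:
grep 'open files' /proc/$(pidof -s squid)/limits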
It could be too many open files. With Unix, "everything is a file", and this includes sockets. You either need to increase the max open files with ulimit for the user, or possibly in the kernel as well (/proc/sys/fs/file-max). You could also play with the amount of time spent in TIME_WAIT via /proc/sys/net/ipv4/tcp_fin_timeout.
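As a sketch, the same knobs can be read and set through sysctl; the values below are purely illustrative, not recommendations:
# show current values
sysctl fs.file-max
sysctl net.ipv4.tcp_fin_timeout
# change them at runtime (add to /etc/sysctl.conf to persist across reboots)
sysctl -w fs.file-max=100000
sysctl -w net.ipv4.tcp_fin_timeout=30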
If you're using Squid 2.7, you can set the number of FDs available without recompiling; see http://www.squid-cache.org/Doc/config/max_filedescriptors/
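For example, the relevant squid.conf line would look like this (8192 is just an example value, and the process's ulimit still has to allow at least that many):
max_filedescriptors 8192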