What kernel parameter or other settings control the maximum number of TCP sockets that can be open on a Linux server? What are the tradeoffs of allowing more connections?
I noticed while load testing an Apache server with ab that it's pretty easy to max out the open connections on the server. If you leave off ab's -k option, which allows connection reuse, and have it send more than about 10,000 requests, then Apache serves the first 11,000 or so requests and then halts for 60 seconds. A look at netstat output shows 11,000 connections in the TIME_WAIT state. Apparently this is normal: connections are kept open for a default of 60 seconds even after the client is done with them, for TCP reliability reasons.
It seems like this would be an easy way to DoS a server and I'm wondering what the usual tunings and precautions for it are.
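A quick way to count how many sockets are sitting in TIME_WAIT at any given moment (a simpler variant of the netstat pipeline shown further down):

# netstat -ant | grep -c TIME_WAIT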
Here's my test output:
# ab -c 5 -n 50000 http://localhost/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
apr_poll: The timeout specified has expired (70007)
Total of 11655 requests completed
Here's the netstat command I run during the test:
# netstat --inet -p | grep "localhost:www" | sed -e 's/ \+/ /g' | cut -d' ' -f 1-4,6-7 | sort | uniq -c
11651 tcp 0 0 localhost:www TIME_WAIT -
1 tcp 0 1 localhost:44423 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44424 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44425 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44426 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44428 SYN_SENT 7831/ab
I finally found the setting that was really limiting the number of connections: net.ipv4.netfilter.ip_conntrack_max. This was set to 11,776, and whatever I set it to is the number of requests I can serve in my test before having to wait tcp_fin_timeout seconds for more connections to become available. The conntrack table is what the kernel uses to track the state of connections, so once it's full, the kernel starts dropping packets and complaining about it in the log.
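You can check the current limit and raise it with sysctl; the new value below is only an example, and on newer kernels the key is net.netfilter.nf_conntrack_max:

# sysctl net.ipv4.netfilter.ip_conntrack_max
# sysctl -w net.ipv4.netfilter.ip_conntrack_max=32768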
The next step was getting the kernel to recycle all those connections in the TIME_WAIT state rather than dropping packets. I could get that to happen either by turning on tcp_tw_recycle or by increasing ip_conntrack_max so that it is larger than the number of local ports made available for connections by ip_local_port_range. I guess once the kernel is out of local ports it starts recycling connections. This uses more memory for tracking connections, but it seems like a better solution than turning on tcp_tw_recycle, since the docs imply that that is dangerous. With that configuration I can run ab all day and never run out of connections.
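For illustration, settings along these lines do the trick; the exact values are assumptions to be sized against available memory, and the point is simply that the conntrack table is bigger than the ephemeral port range:

# sysctl -w net.ipv4.ip_local_port_range="32768 61000"
# sysctl -w net.ipv4.netfilter.ip_conntrack_max=65536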
The tcp_max_orphans setting didn't have any effect on my tests, and I don't know why. I would think it would close the connections in the TIME_WAIT state once there were 8192 of them, but it doesn't do that for me.

You really want to look at what the /proc filesystem has to offer you in this regard; the tunables under /proc/sys/net/ipv4/ are of particular interest.
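For example, each tunable is just a file that can be read and written; the value 30 below is only an illustration:

# cat /proc/sys/net/ipv4/tcp_fin_timeout
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout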
I don't think there is a tunable to set that directly. This falls under the category of TCP/IP tuning. To find out what you can tune, try 'man 7 tcp'. sysctl ('man 8 sysctl') is used to set these. 'sysctl -a | grep tcp' will show you most of what you can tune, though I am not sure whether it shows all of them. Also, unless this has changed, open TCP/IP sockets look like file descriptors, so the file descriptor limits might also be what you are looking for.
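For the file descriptor side, the system-wide limit is exposed the same way (fs.file-max is the matching sysctl key):

# cat /proc/sys/fs/file-max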
Try lowering tcp_fin_timeout as well. This should close out TIME_WAIT connections more quickly.
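To make a lower value persistent across reboots, a line like this in /etc/sysctl.conf works; 30 is an arbitrary example, the default being 60:

net.ipv4.tcp_fin_timeout = 30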
The stock apache(1) used to come predefined to support only 250 concurrent connections; if you wanted more, there was one header file to modify to allow more concurrent sessions. I don't know if this is still true with Apache 2.
Also, you need to add an option to allow many more open file descriptors for the account that runs Apache, something that the previous comments fail to point out.
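For example, in /etc/security/limits.conf; www-data is an assumption here, since the Apache account varies by distribution (apache, httpd, etc.):

www-data soft nofile 65536
www-data hard nofile 65536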
Pay attention to your worker settings and to the keepalive timeouts you have inside Apache itself, how many spare servers you have running at once, and how fast these extra processes are getting killed.
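For illustration, these are the relevant knobs in an Apache 2.x prefork configuration; the values are placeholders, not recommendations:

<IfModule mpm_prefork_module>
    # How many idle children to keep around, and the hard cap on workers
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    # 0 = never kill a child based on request count alone
    MaxRequestsPerChild   0
</IfModule>

# How long an idle keepalive connection is held open
KeepAlive On
KeepAliveTimeout 5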
You could reduce the time spent in the TIME_WAIT state (set net.ipv4.tcp_fin_timeout). You could replace Apache with YAWS or nginx or something similar.

Tradeoffs of allowing more connections generally involve memory usage, and, if you have a forking server, lots of child processes that swamp your CPU.
The absolute number of ports available on a single IP address is 2^16 (65,536), and that limit comes from the TCP/UDP header format, not from the kernel. In practice this caps the number of concurrent connections a single client address can open to one server address and port.
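Related to that, the slice of those 2^16 ports the kernel will actually hand out for outgoing connections is the ephemeral range, which you can inspect with:

# cat /proc/sys/net/ipv4/ip_local_port_range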
The Apache HTTP server benchmarking tool ab has a -s timeout option as of version 2.4. See also "ab (Apache Bench) error: apr_poll: The timeout specified has expired (70007) on Windows". This option solves your problem.
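For example, re-running the original test with a longer socket timeout (the 120 seconds here is arbitrary):

# ab -s 120 -c 5 -n 50000 http://localhost/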