Okay, this is creeping me out - I see about 1500-2500 of these:
root@wherever:# netstat
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:60930 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60934 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60941 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60947 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60962 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60969 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60998 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60802 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60823 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60876 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60886 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60898 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60897 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60905 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60918 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60921 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60673 localhost:sunrpc TIME_WAIT
tcp 0 0 localhost:60680 localhost:sunrpc TIME_WAIT
[etc...]
root@wherever:# netstat | grep 'TIME_WAIT' |wc -l
1942
That number is changing rapidly.
I do have a pretty tight iptables config, so I have no idea what could be causing this. Any ideas?
Thanks,
Tamas
Edit: Output of 'netstat -anp':
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:60968 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60972 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60976 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60981 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60980 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60983 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60999 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60809 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60834 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60872 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60896 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60919 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60710 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60745 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60765 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60772 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60558 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60564 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60600 127.0.0.1:111 TIME_WAIT -
tcp 0 0 127.0.0.1:60624 127.0.0.1:111 TIME_WAIT -
EDIT: tcp_fin_timeout DOES NOT control TIME_WAIT duration; it is hardcoded at 60s.
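(One way to see this for yourself, assuming your netstat supports the -o/--timers flag or iproute2's ss is installed: the countdown shown in the timer column starts at about 60 seconds regardless of what tcp_fin_timeout is set to.)
root@wherever:# netstat -tan -o | grep TIME_WAIT
root@wherever:# ss -tan -o state time-wait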
As mentioned by others, having some connections in TIME_WAIT is a normal part of the TCP connection lifecycle. You can see the interval by examining /proc/sys/net/ipv4/tcp_fin_timeout, and change it by modifying that value:
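For example (a sketch run as root; the default is typically 60 seconds, and 30 below is just an illustrative value, not a recommendation):
root@wherever:# cat /proc/sys/net/ipv4/tcp_fin_timeout
60
root@wherever:# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout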
Or set it permanently by adding it to /etc/sysctl.conf:
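Something along these lines (again, 30 is only an example value):
net.ipv4.tcp_fin_timeout = 30
Then run sysctl -p to apply it without a reboot.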
Also, if you don't use the RPC service or NFS, you can just turn it off:
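For instance, by stopping the portmapper; the service is typically called portmap on older distributions and rpcbind on newer ones, so use whichever name your system actually has:
root@wherever:# /etc/init.d/portmap stop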
And turn it off completely at boot:
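A sketch for the two common init styles (chkconfig on Red Hat style systems, update-rc.d on Debian style ones; substitute rpcbind if that is the name your distribution uses):
root@wherever:# chkconfig portmap off
root@wherever:# update-rc.d portmap disable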
TIME_WAIT is normal. It's a state after a socket has closed, used by the kernel to keep track of packets which may have got lost and turned up late to the party. A high number of TIME_WAIT connections is a symptom of getting lots of short-lived connections, not something to worry about.
It isn't important. All that signifies is that you're opening and closing a lot of Sun RPC TCP connections (1500-2500 of them every 2-4 minutes). The TIME_WAIT state is what a socket goes into when it closes, to prevent messages from arriving for the wrong applications as they might if the socket were reused too quickly, and for a couple of other useful purposes. Don't worry about it.
(Unless, of course, you aren't actually running anything that should be processing that many RPC operations. Then, worry.)
Something on your system is doing a lot of RPC (Remote Procedure Calls) within your system (notice that both source and destination are localhost). That's often seen for lockd for NFS mounts, but you might also see it for other RPC calls like rpc.statd or rpc.spray.
You could try using "lsof -i" to see who has those sockets open and what's doing it. It's probably harmless.
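For example (sunrpc is port 111; note that sockets sitting only in TIME_WAIT no longer belong to any process, which is why netstat -anp shows "-" for them, so you may have to catch the caller while a connection is still open):
root@wherever:# lsof -i TCP:sunrpc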
tcp_fin_timeout does NOT control the TIME_WAIT delay. You can see this by using ss or netstat with -o to show the countdown timers: even with tcp_fin_timeout set to 3, the countdown for TIME_WAIT still starts at 60. However, if you have net.ipv4.tcp_tw_reuse set to 1 (echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
) then the kernel can reuse sockets in TIME_WAIT if it determines there won't be any possible conflicts in TCP segment numbering.

I had the same problem too. It cost me several hours to find out what was going on. In my case, the reason for this was that netstat tries to look up the hostname corresponding to the IP (I assume it's using the gethostbyaddr API). I was using an embedded Linux installation which had no /etc/nsswitch.conf. To my surprise, the problem only exists when you are actually doing a netstat -a (I found this out by running portmap in verbose and debug mode).
Now what happened was the following: by default, the lookup functions also try to contact the ypbind daemon (Sun Yellow Pages, also known as NIS) to query for a hostname. To query this service, the portmapper portmap has to be contacted to get the port for that service. In my case the portmapper got contacted via TCP. The portmapper then tells the libc function that no such service exists and the TCP connection gets closed. As we know, closed TCP connections enter a TIME_WAIT state for some time. So netstat catches this connection when listing, and each new line with a new IP triggers a new lookup, which generates a new connection in the TIME_WAIT state, and so on...
In order to solve this issue, create a /etc/nsswitch.conf that does not use the RPC NIS services, i.e. with contents along the following lines:
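A minimal sketch (adjust the databases to whatever your system actually needs; the important part is that the hosts line lists only files and dns, with no nis):
passwd: files
group: files
shadow: files
hosts: files dns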