We are using NAT made with iptables. However, when there are lots of connections (or sessions) through the NAT server, packages start to drop. There is nothing logged in syslog etc. I think this is caused by the sysctl configuration, but I don't know how to fix it or increase the number of connections allowed.
Edit: Sorry for not giving additional information earlier. It is below:
The NAT server runs 64-bit Ubuntu 12.04 LTS. It has two interfaces: one has a public IP and one has a local IP. The local subnet mask is 255.255.0.0.
Edit 2: About "lots": I really have no idea how many, but I know that it is a stock version of Ubuntu. I mean we didn't tune anything yet.
Thanks
You haven't advised what "lots" of connections are, nor provided us hardware details, so it's difficult to provide a definitive answer.
It is possible that you are running up against a hard limit of NAT, inasmuch as there is a limited number of ports (about 64k) and each connection needs its own port on a Linux implementation. You may be able to mitigate/overcome this by using multiple IP addresses (not sure if that is an option for you), or maybe by using a transparent proxy if a lot of the connections are HTTP.
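If multiple public addresses are an option, a minimal sketch of spreading the NAT over a small pool (eth0 and the 203.0.113.x addresses are placeholders for your public interface and IPs):

```
# Hypothetical example: SNAT outbound traffic across four public addresses
# instead of one, which multiplies the available source-port space.
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10-203.0.113.13
```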
Also, what do you mean by "server packages"? If you mean "packets", you may find the limit is not a NAT limit; rather it could be a limit of the size of your Internet connection or a limit in the number of packets the connected router can handle. [More details might help us here as well]
You do not say which system you are running iptables on.
If you are using FreeBSD, it stores network packets in mbuf clusters.
You can tune the number of mbuf clusters with: sysctl kern.ipc.nmbclusters=65536
By setting it to 65536 you will use about 144 MB of memory, and you should not go above that without adjusting the kernel address space.
Then this opens the question of address space in the kernel. On i386 the kernel memory is set to 1 GB. You can increase that to 2 GB in the kernel configuration file; a sketch follows.
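Going from memory of the referenced report, the relevant i386 option is KVA_PAGES; a sketch of the kernel configuration line (verify the exact value against the report):

```
# i386 kernel configuration: raise kernel virtual address space
# from the default 1 GB (KVA_PAGES=256) to 2 GB
options KVA_PAGES=512
```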
On amd64 the default is 2 GB and is not tunable.
You can also increase the amount of kernel memory (default 320 MB) set in /boot/loader.conf; a sketch follows.
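A sketch of the corresponding /boot/loader.conf entries, assuming the vm.kmem_size tunables discussed in the report (the sizes are illustrative, not recommendations):

```
# /boot/loader.conf -- raise the kernel memory limit (illustrative values)
vm.kmem_size="1G"
vm.kmem_size_max="1G"
```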
The above is a summary of the excellent report by Igor Sysoev, which can be found here: http://rerepi.wordpress.com/2008/04/19/tuning-freebsd-sysoev-rit/
You will hit the above limits before the 64k ports limit. If your server is a really busy firewall, that is not unlikely. If you are running Linux, similar tuning may apply.
The first answer was related to FreeBSD. The issues are the same, but the parameters are different on Ubuntu:
Stock Ubuntu goes a long way. When you start to tune these parameters you might introduce more problems than you fix. Given the lack of information you have provided, my advice would be not to touch these parameters until you understand the networking parts a little better. And if you really have problems with some of these parameters, you will usually get entries in your kernel log.
300 servers behind a NAT with only 600 rules are not huge numbers. If they are just a bunch of servers with little inbound and outbound traffic, this should be a walk in the park. If any of the servers is doing heavy inbound or outbound traffic, then things can turn sour pretty quickly.
The first problem to solve is how much "lots" is. The numbers you have shown so far are not scary, and you should really confirm that you have big numbers before you start tweaking your network stack with sysctl. Be very, very careful with this.
To see if you have a large number of concurrent connections through the box, you can count the entries in the connection tracking table; a sketch follows.
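A minimal sketch on Linux, assuming the conntrack /proc interface of a stock Ubuntu 12.04 kernel (older kernels expose /proc/net/ip_conntrack instead):

```
# Each line is one connection currently tracked through the NAT box
wc -l /proc/net/nf_conntrack
```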
Or to get even more statistics, you can run netstat -s (Linux) or netstat -m (FreeBSD). This will tell you the currently used numbers. With that information in hand you can decide whether you need to tune or not.

The most common problem with busy NATs is net.ipv4.netfilter.ip_conntrack_max, which governs how many connections you track. You can see how many entries are in the tracking table and check the current maximum you have set; a sketch of both follows.
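Assuming the legacy ip_conntrack sysctl names that Ubuntu 12.04 still exposes (newer kernels use net.netfilter.nf_conntrack_count and nf_conntrack_max):

```
# Entries currently in the tracking table
sysctl net.ipv4.netfilter.ip_conntrack_count
# Configured maximum number of tracked connections
sysctl net.ipv4.netfilter.ip_conntrack_max
```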
AFAIK the default is 65536. You can increase this number as needed, as long as you have memory for it.
davidgo mentions the 64K ports limit. You can check the number of configured ephemeral ports, and maybe you need to increase it; a sketch follows.
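A sketch, assuming the ephemeral ports in question are governed by net.ipv4.ip_local_port_range (the widened range shown is an example, not a recommendation):

```
# Check the current ephemeral port range
sysctl net.ipv4.ip_local_port_range
# Widen it if you need more simultaneous outgoing ports
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```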
It is also interesting to know how quickly you discard closing sockets. Slow teardown can leave hanging sockets, and on a busy server you would like to free them up more quickly. You can free up ports quicker by decreasing this value to 25 or maybe 10 seconds; a sketch follows.
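The parameter is not named above; the usual knob for this on Linux is net.ipv4.tcp_fin_timeout (default 60 seconds), so assuming that is what is meant:

```
# How long sockets linger in FIN-WAIT-2 before being reclaimed (default 60 s)
sysctl net.ipv4.tcp_fin_timeout
# Reclaim them faster, e.g. after 25 seconds
sysctl -w net.ipv4.tcp_fin_timeout=25
```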
These are usually the ones to tune, but there are other relevant parameters as well.
You can set all parameters using the sysctl command or by editing /etc/sysctl.conf; a sketch of both follows.
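A sketch of both approaches, reusing one of the parameters above with an illustrative value:

```
# One-off change at runtime (lost at reboot)
sudo sysctl -w net.ipv4.netfilter.ip_conntrack_max=262144

# Persistent: append the setting to /etc/sysctl.conf and reload
echo "net.ipv4.netfilter.ip_conntrack_max = 262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```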
Your results may differ, but to give you an idea, the numbers below have been set on a reasonably busy server: