The maximum value is dynamic: it depends on which system limit you hit first. For example, the maximum number of open files per process, multiplied by the number of processes that fit in RAM + swap. That's just one possible limit.
From a network perspective, TCP has 2^16 = 65,536 port numbers (65,535 usable), and each of those ports could in principle accept a connection from 65,535 unique client ports. So if nothing else were a limitation (hah), from a TCP-only perspective you could have up to 65,535 × 65,535 = 4,294,836,225 unique connections, just shy of 2^32.
Good luck with that. :)
Your question perhaps hints more about how you deal with lots of TCP connections on a host.
This is classically called the C10k problem. (10,000+ concurrent connections used to be the high-water mark at which things fell apart.)
Here's a doc explaining how you can help scale a box to many thousands of connections.
http://www.kegel.com/c10k.html
It's a little dated, but in my experience most Linux apps use epoll to mitigate this.
Cheers.