I have a network of 100 machines.
Is there a limit to the number of machines that can connect to one single machine (at the same time)?
For example, can I have 99 of my machines maintain continuous ssh
connection to the 100th machine?
Can I have every one of my machines (every one of the 100) maintain a continuous ssh
connection to all other 99 machines?
How much memory does each such connection take?
It will ultimately be limited by several factors, such as the maximum number of open file descriptors and available memory, but 100 connections is not a large number.
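The open-file limit is easy to inspect before it becomes a problem. A quick sketch, assuming a Linux box (the paths are standard, the values vary per system):

```shell
# Per-process file-descriptor limit: each ssh connection
# consumes several descriptors on the server side.
ulimit -n

# System-wide file-descriptor limit.
cat /proc/sys/fs/file-max

# Descriptors currently held by each sshd process, if any are running.
for pid in $(pgrep -x sshd 2>/dev/null); do
    echo "sshd $pid: $(ls /proc/$pid/fd 2>/dev/null | wc -l) fds"
done
```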
To restrict the number of connections, use the connlimit module in iptables.
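For example, a rule along these lines should cap concurrent SSH connections per source address (the threshold of 10 is arbitrary; this is a sketch of the syntax, not a tested ruleset, and it needs root):

```shell
# Reject new SSH connections from any single source IP
# once it already has 10 established connections to port 22.
iptables -A INPUT -p tcp --syn --dport 22 \
    -m connlimit --connlimit-above 10 -j REJECT --reject-with tcp-reset
```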
If you want to measure how much memory each connection uses on average, record the number of connections over time and plot it against free memory (excluding buffers and cache).
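A minimal sketch of such a recording loop, assuming a Linux box (it reads /proc directly so it needs no extra tools; the log file name and the one-minute interval are arbitrary):

```shell
# Sample once per minute: timestamp, number of established TCP
# connections involving port 22 (0016 hex in /proc/net/tcp,
# where field 4 "01" means ESTABLISHED), and available memory in MiB.
while true; do
    conns=$(awk '$4 == "01" && ($2 ~ /:0016$/ || $3 ~ /:0016$/)' \
            /proc/net/tcp | wc -l)
    avail=$(awk '/^MemAvailable:/ {print int($2 / 1024)}' /proc/meminfo)
    echo "$(date +%s) $conns $avail" >> ssh-mem.log
    sleep 60
done
```

Plot the second column against the third (gnuplot or a spreadsheet will do); the slope of the trend gives a rough average per-connection memory cost.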
We use login servers to start lots of sessions to/from. The thing that maxes them out is the actual bash/shell sessions, not the ssh connections per se. We use a very long history for each session (history -a rules) and that is what really chews up the memory.

So no, there should not be a hard limit, but there can be practical limits from running lots of shell sessions. An ssh connection itself is about half the size of a bash session (YMMV). From a jump box, using ps_mem.py (a very handy memory usage sorter):
Private + Shared = RAM used Program
23.8 MiB + 13.4 MiB = 37.1 MiB sshd (17)
63.1 MiB + 1.6 MiB = 64.7 MiB bash (19)