I have a server with a gigabit uplink. Testing with iperf3 and 100 parallel connections, I get at least 600 Mbit/s, depending on the other server (I tried some public test servers). But when I run iperf3 with a single connection I only get 10-15 Mbit/s, with two connections 20-30 Mbit/s, and so on.
My iptables rules are not very complicated, and I have no other idea why it is so slow. What can be the limiting factor that makes single TCP connections roughly 10 times slower than the available bandwidth?
I finally found the cause of the problem. I had a Flask webapp that uses Redis to stream events to the user. When a user disconnected, the app kept the Redis pubsub connection alive without ever reading from it again.
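For reference, here is a minimal sketch of the pattern and the fix, assuming redis-py and a Flask streaming response; the route and channel names are just illustrative, not my actual code. The important part is the finally block: when the client goes away the server closes the response generator, and the pubsub connection gets cleaned up instead of sitting there unread.

```python
# Minimal sketch (assumed setup: Flask + redis-py); names are illustrative.
from flask import Flask, Response
import redis

app = Flask(__name__)
r = redis.Redis()

@app.route("/events")
def events():
    pubsub = r.pubsub()
    pubsub.subscribe("events")  # hypothetical channel name

    def stream():
        try:
            for message in pubsub.listen():
                if message["type"] == "message":
                    yield message["data"].decode() + "\n"
        finally:
            # Without this cleanup the subscription stays registered after the
            # client disconnects: Redis keeps pushing events into a socket that
            # is never read, and the unread bytes pile up in the kernel queues.
            pubsub.close()

    return Response(stream(), mimetype="text/plain")
```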
These stale connections lead to a long Send-Q/Recv-Q, which apparently causes the TCP stack to slow down and produces kernel warnings: "TCP: out of memory -- consider tuning tcp_mem".

Single TCP sessions are limited by the size of each session's window, which represents the maximum number of bytes that can be "in flight" between the two endpoints at any given time. So if you have high latency on your link, you can hit a per-session limit of windowSize / RTT.
The only way around this (since you usually can't do much about RTT) is either to use more sessions, or to significantly increase the window size using TCP window scaling. I don't know what iPerf's settings are in that regard, or whether a firewall or other filter between the endpoints prevents the scaling even if iPerf and your server both support it.
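As a sanity check, here is the windowSize / RTT arithmetic with purely illustrative numbers (a 64 KiB window and 40 ms round-trip time, assumed rather than measured on this link); the result lands right in the 10-15 Mbit/s range reported in the question.

```python
# Back-of-the-envelope per-session limit: windowSize / RTT.
# Illustrative values, not measurements from the link in question.
window_bytes = 64 * 1024   # bytes that may be in flight per connection
rtt_seconds = 0.040        # round-trip time

throughput_bits = window_bytes * 8 / rtt_seconds
print(f"{throughput_bits / 1e6:.1f} Mbit/s per connection")  # ~13.1 Mbit/s
```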