I am currently examining the performance (especially UDP throughput) of different Docker overlay networks. I do this by creating point-to-point connections between two hosts that are connected by a Docker overlay network and then running iperf inside the Docker containers to measure the throughput. I noticed that every time I run iperf as a client to send data to the other container, which runs iperf as a server, the CPU usage of the client host reaches 100%. I got that result by running the following command, which I found here:
top -bn1 | grep "Cpu(s)" | \
sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \
awk '{print 100 - $1"%"}'
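As a cross-check on that pipeline, the same figure can be computed directly from /proc/stat instead of parsing top's output. This is a minimal sketch, assuming a Linux host; it only uses the first four fields of the aggregate cpu line (user, nice, system, idle) and ignores iowait and the interrupt counters for simplicity:

```shell
# Sample the aggregate "cpu" line of /proc/stat twice, one second apart.
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat

# Deltas over the sampling interval, in clock ticks.
total=$(( (u2 + n2 + s2 + i2) - (u1 + n1 + s1 + i1) ))
idle=$(( i2 - i1 ))

# Busy percentage = everything that was not idle.
busy=$(( 100 * (total - idle) / total ))
echo "${busy}%"
```

On a multi-core host, remember that this is an average across all cores; iperf is largely single-threaded, so one core can be pegged at 100% while the average looks moderate.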
So it seems to me that the limiting factor in my throughput tests is the CPU capacity of the host: it runs at 100% and cannot generate enough traffic to saturate the network connection. I am wondering whether this is an iperf-specific issue, so I wanted to run the same tests with a different tool, but I am not sure which alternative would be best. The hosts are running Ubuntu. For example, I found qperf, uperf and netpipe.
Also, more generally, I started to wonder what the bottleneck for throughput performance normally is. Isn't it always either the CPU capacity or the bandwidth of the link, both of which are factors not directly related to the overlay network itself?
Does that mean that the throughput of an application (or overlay network) just depends on how many CPU cycles it needs to transfer a given amount of data, and on how well it compresses that data to fit it through the network (if the link is the bottleneck)?
UDP is both CPU and bandwidth bound. It sends packets without any guarantee that they are transmitted or received.
Generally speaking, raw UDP throughput figures are meaningless. Nothing prevents you from trying to send a bazillion packets a second: that saturates the sender's CPU and the network, while the receiver might not get much of anything.
If you really want to test UDP, that is a rather long topic, worthy of a book. For a start, you need to monitor error rates and how much data is actually sent and received.
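As an illustration, iperf's UDP mode already reports datagram loss and jitter on the server side. A sketch of the two invocations (the server host name and the 1000M target rate are placeholders; the flags are the classic iperf 2 ones):

```shell
# Receiving host: UDP server, prints per-interval loss and jitter.
iperf -s -u -i 1

# Sending host: offer 1000 Mbit/s of UDP traffic for 10 seconds.
# The client-side number only shows what was handed to the kernel;
# the server-side report shows what actually arrived.
iperf -c <server-host> -u -b 1000M -t 10 -i 1
```

The interesting figure is the gap between the client and server reports: that gap is your loss rate at the offered load.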
You should test with TCP to measure the available bandwidth between hosts.
iperf should be able to do that just fine.
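For instance, a plain TCP run between the two containers could look like this (the server host name is again a placeholder; -P adds parallel streams, which can help saturate the link if a single stream is CPU-bound):

```shell
# Server side:
iperf -s

# Client side: 10-second TCP test with three parallel streams.
iperf -c <server-host> -t 10 -P 3
```

Because TCP backs off under loss, the reported bandwidth reflects what the path can actually sustain, unlike the UDP send rate.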