As I understand it, bonding brings among other benefits the ability to increase the network speed between two machines in a LAN.
Bonding [...] means combining several network interfaces (NICs) to a single link, providing either high-availability, load-balancing, maximum throughput, or a combination of these.
Source: Ubuntu documentation, emphasis mine.
I have bonding configured on two servers; both have two 1 Gbps NIC adapters. When testing speed between those servers using iperf, the report indicates:
- 930 to 945 Mbits/sec when using the balance-rr bonding mode,
- 520 to 530 Mbits/sec from machine A to B when using 802.3ad,
- 930 to 945 Mbits/sec from machine B to A when using 802.3ad.
An interesting thing is that when using 802.3ad, ifconfig indicates that practically all RX is on eth0 (2.5 GB vs. a few KB/MB) and all TX is on eth1 on machine A, and the inverse on machine B.
When asking iperf to use multiple connections (iperf -c 192.168.1.2 -P 10), the obtained sum is very close to the result displayed when using a single connection.
Two machines are connected to a Netgear GS728TS which has LACP configured properly (I hope), with two LAGs covering two ports each. IEEE 802.3x mode is enabled.
Is iperf well suited for this sort of test? If yes, is there something I'm missing?
Bonded interfaces do not grant additional bandwidth to individual network flows. So if you're only running one copy of iperf, you will only be able to use one network interface at a time. If you have two NICs in a lagg, you'll need at least two completely independent copies of iperf running on the computer to see any simultaneous utilization. This applies to actual workloads as well: e.g., a Samba client will still only see 1 Gbps throughput, but two clients could each see 1 Gbps if your lagg has two NICs. This all assumes you have the lagg configured to use both NICs (the 802.3ad mode will do this).
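As a sketch of what "completely independent copies" means in practice, here is one hedged example; the IP address and port numbers are hypothetical, and classic iperf 2 syntax is assumed:

```shell
# On the bonded server (192.168.1.2 is a placeholder): start two
# independent iperf instances, each listening on its own TCP port.
iperf -s -p 5001 &
iperf -s -p 5002 &

# From two *different* client machines, started at the same time.
# Each distinct TCP flow may then be hashed onto a different slave NIC.
iperf -c 192.168.1.2 -p 5001 -t 30   # run on client 1
iperf -c 192.168.1.2 -p 5002 -t 30   # run on client 2
```

Whether the two flows actually land on different slave NICs depends on the switch's and the bonding driver's hash policy, so results can vary between runs.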
After contacting Netgear support, it appears that:
Source: Netgear support ticket response
The same ticket response links to Netgear's public forum post, where we can read that:
For those who don't want to read the entire forum discussion, here are the key points:
There should be at least two clients connecting to the server to benefit from LACP. A single client will use one link only, which will limit its speed to 1 Gbps.
Two clients should be using different links to benefit from LACP.
With only two network adapters on the server, there is a 50% chance that two clients are hashed onto the same link, which caps the total speed at 1 Gbps. Three network adapters decrease the chance to 33%, four to 25%.
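The 50/33/25% figures above follow from the hash picking one of N links uniformly and independently per client, so two clients collide with probability 1/N. A quick sketch of the arithmetic (integer percentages):

```shell
# Chance that two clients are hashed onto the same one of N server
# links, assuming a uniform, independent hash per client: 1/N.
for n in 2 3 4; do
  echo "$n links: $((100 / n))% chance of sharing a link"
done
```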
To conclude, there is no way with the Netgear GS728TS to obtain a speed of 1.4 to 1.8 Gbps between two machines.
This Q&A was very helpful for me to understand bonding with LACP, but there was no concrete example of how to verify a throughput of about 1.8 Gb/s. Since it was important for me to verify this, I will share how I tested it.
As @ChrisS noted in his answer, it is important to have completely independent copies of iperf running. To achieve this, I connect to the lacp-server with two clients. On the lacp-server, I use screen to run independent instances of iperf in two screen windows/sessions. I also ensure independent data streams by using a different port for each connection. My switch, with LACP bonding to the server, is a TP-LINK T1600G-52TS. All devices use Debian 10 (Buster). The two test clients are connected to ports of the switch. First I started iperf in server mode twice on the lacp-server within screen, and then executed on the clients at the same time (using ssh):
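The setup described above might look roughly like this; the host names (lacp-server, client1, client2) and port numbers are illustrative, and iperf 2 syntax is assumed:

```shell
# On lacp-server, one command per screen window/session:
iperf -s -p 5001        # screen window 1
iperf -s -p 5002        # screen window 2

# Then start both clients at the same time via ssh
# (run from a machine that can reach both clients):
ssh client1 "iperf -c lacp-server -p 5001 -t 60" &
ssh client2 "iperf -c lacp-server -p 5002 -t 60" &
wait
```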
Here are the results on the lacp-server for the first connection:
and for the second connection:
Together this is a bandwidth of 855 Mb/s + 906 Mb/s = 1761 Mb/s, i.e. about 1.76 Gb/s.
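The sum and the unit conversion can be sanity-checked quickly:

```shell
# Add the two per-connection bandwidths (in Mb/s) and
# express the total in Gb/s (1000 Mb/s = 1 Gb/s).
echo "$((855 + 906)) Mb/s"
awk 'BEGIN { printf "%.3f Gb/s\n", (855 + 906) / 1000 }'
```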
@ArseniMourzenko noted in his answer:
I have repeated the test more than 10 times to verify this, but I always get a bandwidth of about 1.8 Gb/s, so I cannot confirm this.
The statistics of the interfaces show that their usage is balanced:
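One way to read per-interface byte counters on Linux is from /proc/net/dev; this is a generic sketch, not the exact command I used, and the interface names it prints depend on the machine:

```shell
# Print RX/TX byte totals per interface from /proc/net/dev.
# On a balanced bond, the slave NICs should show similar totals.
awk 'NR > 2 { gsub(/:/, " "); printf "%-10s RX %15s bytes  TX %15s bytes\n", $1, $2, $10 }' /proc/net/dev
```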
With three test clients I get these results:
References:
Link Aggregation and LACP basics
LACP bonding and Linux configuration
Linux Ethernet Bonding Driver HOWTO
RedHat - Using Channel Bonding