On a server running Debian Stretch I configured a bond0 interface in 802.3ad mode as follows:
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
The bond0 interface is up and running, but it is working in load-balancing (round-robin) mode:
root@servir01:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: e4:1f:13:65:f0:c4
Slave queue ID: 0
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: e4:1f:13:36:a3:ac
Slave queue ID: 0
On the switch the LAG is correctly created with LACP enabled, and it has both ports up and running.
The same machine has another bond interface (bond1, on the eth1 and eth3 interfaces) configured in the very same way and connected to the same switches, and there LACP is working fine:
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: e4:1f:13:65:f0:c6
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 1010
Why doesn't the bond0 interface enable LACP? Where am I wrong?
Old question, but it comes up pretty early in searches and I had a similar setup with the same problem. Here's how I got it working (using ifenslave on Debian Stretch)...
/etc/network/interfaces...
What was the cause?
Well, the NICs would come up, the bonding driver would grab them, then the NICs would go down to reconfigure as slaves, and the bonding driver would panic because it had no slaves and run around like a headless chicken (round robin).
Now, the bonding driver comes up, sees that it has no slaves, so it sits back and waits... The NICs see that they have a master, so they go and report in, get their addresses from bond0, and off to work they all go.
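The answer's interfaces file is not reproduced here, but a minimal sketch of that 'slaves report to the master' layout on Debian Stretch, reusing the question's eth0/eth2 names (an illustration of the approach, not necessarily the answerer's exact file), would be roughly:
# children carry only a pointer to their master
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

# the bond starts empty and waits for its slaves to report in
auto bond0
iface bond0 inet manual
    bond-slaves none
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate slow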
Tested on Debian 10 (after reading this thread and the Debian bonding documentation).
The config is below (no other files were edited - modules or anything like that).
What's new:
I spent a few days after upgrading (full-upgrade) from Debian 10 Buster to Debian 11 Bullseye, so I want to share the solution to the bonding issue.
After the Debian upgrade, the existing trunk configuration no longer works. There are breaking changes, reported as bugs:
The bond0 configuration that previously worked on Debian 10 looked like this:
which resulted in no bond0 being configured, or even in errors like these:
or
showing the error line
where 'stanza' is the so-called module configuration, a term used by the developers.
The root cause is that the ifenslave package was refactored a lot; the main idea was to remove the 'stanza' from the child items, which are the physical interfaces (NICs), and keep it all in one place, i.e. on the bond interface itself. Also, even in ifenslave version 1.22 a bug is left that refers to the nonexistent command ifstate in Debian 11. An easy and quick fix is:
Even after fixing this, bonding does not work, which means there are other bugs or issues keeping bonding from working on Bullseye.
Going through the code I found that the key change was not only to remove bond-mode from the child interfaces and put it back into the bond interface configuration, like it was in the early days of the package, but also to revert to the early format of bond-slaves. Thus a working Debian 11 Bullseye bonding configuration file looks like this:
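The file itself is not quoted here, but going by that description (all bond-* options on the bond stanza, nothing bonding-related on the children), a Bullseye-style sketch reusing the question's interface names (placeholders, not the author's actual file) could look like:
auto eth0
iface eth0 inet manual

auto eth2
iface eth2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth2
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate slow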
Update 2022:
Recently, on one of my bare-metal servers, I ran into an issue where, after a kernel upgrade and removal of the old kernel, the system became networkless. Long story short: it can happen that the bonding kernel module is not loaded, is not present, or fails to load because of a version mix-up or an initrd mess-up. Check that with:
If it's not there, that is the culprit of the problem. Try to load the module manually with modprobe bonding and check whether it loads. Investigate whether the loaded kernel version corresponds to what it is supposed to be (uname -r) and check whether the modules directory is present for that version.
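A sketch of those checks, using the standard Debian module paths (the bonding module normally lives under /lib/modules/<kernel version>/):
modprobe bonding                 # try loading the module by hand
uname -r                         # the kernel the system is actually running
ls /lib/modules/"$(uname -r)"/   # the modules tree must exist for that version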
Reference: https://www.kernel.org/doc/Documentation/networking/bonding.txt
I solved this problem by adding the following to the bond configuration in /etc/network/interfaces:
After adding this configuration and restarting networking, everything is working well.
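For completeness, a typical way to apply the change and confirm that the bond really negotiated LACP (the 'Bonding Mode' line should match the bond1 output shown in the question):
systemctl restart networking     # or: ifdown bond0 && ifup bond0
cat /proc/net/bonding/bond0      # expect 'Bonding Mode: IEEE 802.3ad Dynamic link aggregation'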