I have bonded two NICs (Intel I350) on CentOS 6.4. The configuration looks fine, but I cannot ping any host, or even the switch, on the bond's subnet.
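The setup follows the usual CentOS 6 ifcfg layout. Since I did not paste the original files, here is a minimal sketch reconstructed from the status output and kernel log below (mode=5 and miimon=80 match what the driver reports; treat it as an approximation, not a verbatim copy).

/etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
IPADDR=192.168.100.2
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
# mode=5 is balance-tlb; miimon=80 matches the MII polling interval below
BONDING_OPTS="mode=5 miimon=80"

/etc/sysconfig/network-scripts/ifcfg-eth1 (ifcfg-eth2 is identical apart from DEVICE):

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no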
===Bond0 status===
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 80
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xx:xx:xx:xx:xx:b9
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xx:xx:xx:xx:xx:ba
Slave queue ID: 0
===Interface status (ifconfig)===
bond0     Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:B9
          inet addr:192.168.100.2  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::225:90ff:fe95:cab9/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:369234 (360.5 KiB)

eth1      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:B9
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3106 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:185754 (181.4 KiB)
          Memory:dfb40000-dfb60000

eth2      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:BA
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3056 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:183480 (179.1 KiB)
          Memory:dfb20000-dfb40000
===Message log from ifup bond0===
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Setting MII monitoring interval to 80.
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: setting mode to balance-tlb (5).
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Setting MII monitoring interval to 80.
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: setting mode to balance-tlb (5).
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Adding slave eth1.
Apr 3 11:01:52 HOSTNAME kernel: 8021q: adding VLAN 0 to HW filter on device eth1
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: enslaving eth1 as an active interface with a down link.
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Adding slave eth2.
Apr 3 11:01:52 HOSTNAME kernel: 8021q: adding VLAN 0 to HW filter on device eth2
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: enslaving eth2 as an active interface with a down link.
Apr 3 11:01:52 HOSTNAME kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Apr 3 11:01:52 HOSTNAME kernel: 8021q: adding VLAN 0 to HW filter on device bond0
Apr 3 11:01:55 HOSTNAME kernel: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Apr 3 11:01:55 HOSTNAME kernel: bond0: link status definitely up for interface eth1, 1000 Mbps full duplex.
Apr 3 11:01:55 HOSTNAME kernel: bonding: bond0: making interface eth1 the new active one.
Apr 3 11:01:55 HOSTNAME kernel: bonding: bond0: first active interface up!
Apr 3 11:01:55 HOSTNAME kernel: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Apr 3 11:01:56 HOSTNAME kernel: igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Apr 3 11:01:56 HOSTNAME kernel: bond0: link status definitely up for interface eth2, 1000 Mbps full duplex.
Apr 3 11:01:58 HOSTNAME ntpd[2338]: Listening on interface #8 bond0, fe80::225:90ff:fe95:cab9#123 Enabled
Apr 3 11:01:58 HOSTNAME ntpd[2338]: Listening on interface #9 bond0, 192.168.100.2#123 Enabled
I found the problem. After I changed the bonding mode from 5 (balance-tlb) to 4 (802.3ad), it works now. This is consistent with the symptom above: both slaves show nonzero TX counters but RX packets:0, i.e. the host was transmitting but never receiving any replies.
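Concretely, the change is one line in ifcfg-bond0 (a sketch, using the same file layout as above):

BONDING_OPTS="mode=4 miimon=80"

Note that mode 4 (802.3ad) only works if the switch ports for eth1 and eth2 are configured as an LACP link aggregation group. After bringing the bond back up, cat /proc/net/bonding/bond0 should report "Bonding Mode: IEEE 802.3ad Dynamic link aggregation", and the slaves' RX counters should start increasing.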