I can't find an existing question that matches my scenario. I'm using two public bridges for four VMs, two VMs per bridge device. Interface stats show that inbound traffic for each VM uses its assigned bridge, but outbound traffic goes out only through the first bridge.
# cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
    pre-up iptables-restore < /etc/firewall-rules

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0

auto br1
iface br1 inet dhcp
    bridge_ports eth1
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
# brctl show
bridge name     bridge id           STP enabled     interfaces
br0             8000.a4badb4e3949   no              eth0
                                                    vnet0
                                                    vnet2
br1             8000.a4badb4e394a   no              eth1
                                                    vnet1
                                                    vnet3
# route
Kernel IP routing table
Destination     Gateway          Genmask          Flags  Metric  Ref  Use  Iface
default         vlan-200.mydoma  0.0.0.0          UG     0       0    0    br0
XXX.YYY.200.0   *                255.255.248.0    U      0       0    0    br0
XXX.YYY.200.0   *                255.255.248.0    U      0       0    0    br1
The routing table matches what I'm seeing: all outbound traffic, except traffic destined for the 200 subnet, uses br0 because of the default route.
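For what it's worth, the route the host picks for an arbitrary outside address can be checked with ip route get (192.0.2.1 below is only an example destination, not one of my addresses):

# ip route get 192.0.2.1

Given the table above, this should report dev br0 for anything outside the 200 subnet.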
How can this be configured so that outbound traffic from the KVM guests on br1 actually leaves through br1? The current setup works fine, but I'd prefer each bridge to act as the full gateway path for its own VMs, so that br1 also carries its share of TX packets.
If the interfaces are on the same subnet, bond them. If they're not, the VMs will never use the second bridge unless they have routes that traverse it. It looks like yours are on the same subnet, in which case follow this guide:
https://wiki.debian.org/Bonding
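Roughly, the bonded version of your config might look like this in /etc/network/interfaces (a sketch only, assuming the ifenslave package is installed; 802.3ad is just one example mode and needs LACP configured on the switch, while active-backup works without switch support but won't spread TX load):

auto bond0
iface bond0 inet manual
    # enslave both physical NICs to a single bond device
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100

auto br0
iface br0 inet dhcp
    # bridge on top of the bond instead of a single NIC
    bridge_ports bond0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0

All of the vnet interfaces then attach to this one br0, and the bonding driver, rather than the host routing table, decides which physical NIC each outgoing frame uses.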