This will probably be easier to explain with a diagram, so this is the basic setup:
There are two containers, 10.0.0.1 and 10.0.0.2, running inside a host machine, and I've also created a Linux bridge, br0.
Each of the containers sees its own network interface as eth0, but the host machine sees it as vethxxxx. As the picture of the basic setup shows, there is no connectivity between the containers.
What I can do easily is add both veth interfaces to the bridge using brctl addif, after which the containers can communicate right away, but this is not exactly what I'm looking for.
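For reference, this is the quick hookup I mean (the host-side interface names vethAAAA and vethBBBB are placeholders for the actual vethxxxx names; the modern iproute2 equivalent of brctl addif is shown alongside):

```shell
# bridge-utils way: add both host-side veth peers to br0
brctl addif br0 vethAAAA
brctl addif br0 vethBBBB

# Modern iproute2 equivalent of the two commands above
ip link set vethAAAA master br0
ip link set vethBBBB master br0

# Bring the bridge up so traffic is forwarded
ip link set br0 up
```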
What I want is more granular control over the forwarding between the containers, i.e. to emulate a managed switch on br0, controlled by the host machine. If these were physical machines, I believe this could be done using nmcli with something like:
nmcli connection add type ethernet slave-type bridge con-name br0-port1 ifname <port name> master br0
and then connect the cables from the machines into the ports used in that command.
Is there a way to effectively achieve the same setup virtually (like in pic #3)?
You cannot "attach" a veth interface to a bridge port; you can make it a bridge port by adding it to a bridge.
A veth interface is effectively a virtual ethernet cable running between a pair of ports. The way you typically use them in the context of containers is that one end of the "cable" becomes eth0 inside the container, and the other end is added as a bridge port.

A Linux bridge device does act like a managed switch in that you can configure VLANs per port, control STP per port, etc.
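For example, with VLAN filtering enabled on the bridge, you can assign each port its own VLAN and tune per-port STP parameters with the bridge(8) and ip(8) tools (the port names vethAAAA and vethBBBB below are placeholders for your actual host-side veth names):

```shell
# Turn br0 into a VLAN-aware switch
ip link set br0 type bridge vlan_filtering 1

# Put each port into its own access VLAN (untagged, with PVID set)
bridge vlan add dev vethAAAA vid 10 pvid untagged
bridge vlan add dev vethBBBB vid 20 pvid untagged

# Remove the default VLAN 1 membership so the ports are fully isolated
bridge vlan del dev vethAAAA vid 1
bridge vlan del dev vethBBBB vid 1

# Per-port STP settings can be adjusted too, e.g. path cost
ip link set vethAAAA type bridge_slave cost 10

# Inspect the resulting per-port VLAN table
bridge vlan show
```

With this configuration the two containers can no longer reach each other through br0 (they are in different VLANs), while the host retains full control over which ports may talk to which.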