On an AWS EC2 instance I would like to host LXC containers as a kind of virtual server. I created a bridge (br0) containing only eth0 and gave it a private IP address from my VPC's subnet. I also reconfigured LXC to use my br0 device as its bridge instead of lxcbr0.
When I add a new container and assign it an IP address from my VPC's subnet, I can reach the container from the LXC host, and I can reach the LXC host from within the container. However, no other address is reachable from the container, even though it is in the same subnet.
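For completeness, the LXC side is pointed at br0 with something along these lines in /etc/lxc/default.conf (a sketch; the lxc.network.* key names are from LXC 1.x and may differ on newer releases):
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up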
Bridge configuration:
auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_fd 2
    bridge_stp off
    address 10.8.0.11
    netmask 255.255.255.0
    network 10.8.0.0
    broadcast 10.8.0.255
    gateway 10.8.0.1
    dns-nameservers 8.8.8.8 8.8.4.4
The VPC network interface has "Source/Dest. check" disabled.
net.ipv4.ip_forward is set to 1.
No iptables rules exist.
eth0 is set to promiscuous mode (ip link set eth0 promisc on).
The LXC containers are correctly attached to my bridge.
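For reference, these points can be double-checked on the host with standard commands, e.g.:
brctl show br0               # eth0 and the containers' veth devices should be listed as ports
sysctl net.ipv4.ip_forward   # should print 1
ip link show eth0            # should show PROMISC
iptables -L -n               # should show empty ACCEPT chains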
This setup worked in a hardware-only environment as well as in a VirtualBox environment; on AWS, however, it does not.
Bridging won't work: a VPC is not a layer 2 network, and all IPs need to be assigned through the EC2 API. Your best bet is to use a completely separate (non-conflicting) subnet and have the host route traffic to the LXC containers. Then update your VPC route tables with a static route to this subnet via your EC2 instance's network interface. This is how OpenVPN works.
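A rough sketch of that setup, assuming the containers stay on the stock lxcbr0 subnet 10.0.3.0/24 and using placeholder IDs for the route table and your instance's NIC:

On the LXC host:
sysctl -w net.ipv4.ip_forward=1    # you already have this; the host routes between lxcbr0 and eth0

In the VPC, in the route table of your 10.8.0.0/24 subnet:
aws ec2 create-route --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 10.0.3.0/24 \
    --network-interface-id eni-xxxxxxxx    # the ENI of the LXC host

With that route in place, other VPC hosts reach 10.0.3.x via your instance, and the containers reach the VPC through the host; no bridging of eth0 is needed.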
Following Nath's answer, I put the LXC containers into their own network and routed the traffic between the two networks. Now it works!
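A minimal sketch of what the container-side config can look like under that scheme (addresses are examples, assuming the host's container bridge sits at 10.0.3.1 as in the answer above), in the container's /etc/network/interfaces:
auto eth0
iface eth0 inet static
    address 10.0.3.10
    netmask 255.255.255.0
    gateway 10.0.3.1
    dns-nameservers 8.8.8.8 8.8.4.4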