Background Information
I have a server with two network interfaces that is running Docker. Docker, like some virtualization tools, creates a Linux bridge interface called docker0. This interface is configured by default with an IP of 172.17.42.1, and all Docker containers communicate with this interface as their gateway and are assigned IP addresses in the same /16 range. As I understand it, all network traffic to/from containers goes through a NAT, so outbound it appears to come from 172.17.42.1, and inbound it gets sent to 172.17.42.1.
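For reference, the bridge address and the containers' gateway can be confirmed with something like the following (the busybox image is just an example):
ip addr show docker0               # the bridge and its 172.17.42.1/16 address
docker run --rm busybox ip route   # inside a container, the default route should point at 172.17.42.1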
My Setup looks like so:
                  +------------+            /
                  |            |           |
    +-------------+  Gateway 1 +-----------+
    |             |  10.1.1.1  |          /
+---+----------+  +------------+          |
|     eth0     |                          /
|   10.1.1.2   |                          |
|              |                          |
|  DOCKER HOST |                          |
|              |                          |   Internet
|   docker0    |                          |
|   (bridge)   |                          |
|  172.17.42.1 |                          |
|              |                          |
|     eth1     |                          |
|  192.168.1.2 |                          \
+---+----------+  +------------+          |
    |             |            |           \
    +-------------+  Gateway 2 +-----------+
                  | 192.168.1.1|           |
                  +------------+            \
The Problem
I want to route all traffic from/to any Docker containers out of the second interface, eth1 (192.168.1.2), to a default gateway of 192.168.1.1, while having all traffic from/to the host machine go out the eth0 (10.1.1.2) interface to a default gateway of 10.1.1.1. I've tried a variety of things so far to no avail, but the one I think is closest to correct is to use iproute2 like so:
# Create a new routing table just for docker
echo "1 docker" >> /etc/iproute2/rt_tables
# Add a rule stating any traffic from the docker0 bridge interface should use
# the newly added docker routing table
ip rule add from 172.17.42.1 table docker
# Add a route to the newly added docker routing table that dictates all traffic
# go out the 192.168.1.2 interface on eth1
ip route add default via 192.168.1.2 dev eth1 table docker
# Flush the route cache
ip route flush cache
# Restart the Docker daemon so it uses the correct network settings
# Note, I do this as I found Docker containers often won't be able
# to connect out if any changes to the network are made while it's
# running
/etc/init.d/docker restart
When I bring up a container, I cannot ping out from it at all after doing this. I'm uncertain whether bridge interfaces are handled the same way physical interfaces are for this sort of routing, and I just want a sanity check, as well as any tips on how I might accomplish this seemingly simple task.

A friend and I ran into this exact problem, where we wanted Docker to support multiple network interfaces servicing requests. We were specifically working with the AWS EC2 service, where we were also attaching, configuring, and bringing up the additional interfaces. That project involves more than you need, so I will try to include only the relevant parts here.
First, we created a separate routing table for eth1:
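Something along these lines, assuming a table named eth1_rt (the name and table number are arbitrary) and eth1's gateway at 192.168.1.1 as in your diagram:
# Register a new routing table name
echo "2 eth1_rt" >> /etc/iproute2/rt_tables
# Give that table a default route out eth1 via its gateway
ip route add default via 192.168.1.1 dev eth1 table eth1_rt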
Next, we configured the mangle table to set connection marks on traffic coming in from eth1:
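For example (the mark value 2 is an arbitrary choice):
# Tag new connections that arrive on eth1 so their replies can be recognised later
iptables -t mangle -A PREROUTING -i eth1 -m conntrack --ctstate NEW -j CONNMARK --set-mark 2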
Finally, we added a rule so that all packets carrying that fwmark use the new table we created:
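Roughly:
# Route any packet carrying mark 2 with the eth1_rt table
ip rule add fwmark 2 table eth1_rt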
The iptables command below restores the connection mark onto packets, which then allows the routing rule above to pick the correct routing table:
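A sketch of that step, assuming replies from the containers arrive on docker0:
# Copy the saved connection mark back onto packets coming from the containers,
# so the fwmark rule sends them back out eth1
iptables -t mangle -A PREROUTING -i docker0 -j CONNMARK --restore-mark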
I believe this is all that is needed from our more complex example, where (like I said) our project was attaching, configuring, and bringing up the eth1 interface at boot time. Now, this example would not stop connections from eth0 from servicing requests through to docker0,
but I believe you could add a routing rule to prevent that.

You might also have to look more at the iptables setup. Docker masquerades all traffic originating from the container subnet, say 172.17.0.0/16, to any destination (0.0.0.0/0). If you run iptables -L -n -t nat, you can see the rule in the POSTROUTING chain of the nat table that does this:
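On a stock installation the relevant entry looks roughly like this (addresses vary with the Docker version):
Chain POSTROUTING (policy ACCEPT)
target      prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0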
Now, you can remove this rule and replace it with one that masquerades all traffic originating from the containers' subnet to your second interface's IP, 192.168.1.2, as that is what you desire. Delete the original rule (assuming it is the first rule under the POSTROUTING chain) and then add the custom rule:
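Something like the following; the rule number 1 and the ! -o docker0 exclusion are assumptions, so check your own POSTROUTING listing first:
# Remove Docker's MASQUERADE rule (assumed to be rule 1 in the nat POSTROUTING chain)
iptables -t nat -D POSTROUTING 1
# Source-NAT container traffic to eth1's address, skipping traffic that goes back out the bridge
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j SNAT --to-source 192.168.1.2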

The masquerade isn't from 172.17.42.1, but rather:
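In iptables-save form, the rule Docker installs looks something like this (the exact match varies by version):
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE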
Which means this rule won't work right:
ip rule add from 172.17.42.1 table docker
Try instead:
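That is, match on the whole container subnet rather than the bridge address, presumably:
ip rule add from 172.17.0.0/16 table docker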