I'm experimenting with LXC/LXD in Vagrant, but I'm quite new to it. I managed to create a running container, but I cannot ping anything (including 8.8.8.8) from inside it. I can ping its IP from my top-level non-virtual system, but it refuses SSH connections. I can enter the container only from the container's host (the Vagrant machine), using `lxc exec my-container /bin/bash`.
I tried to set up my container in routed mode, and I still want that, for learning purposes. The LXD/LXC documentation seems somewhat lacking, though.

I tried to follow this guide: https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/ but it didn't work for me in the end. I may have missed something, because I'm not well versed in Linux networking yet.
My Vagrant host is running Ubuntu 20.04. My LXC container is running Debian 10.
LXD configuration on my Vagrant host:

```yaml
config:
  core.https_address: '[::]:8443'
  core.trust_password: true
networks: []
storage_pools:
- config:
    source: /home/luken/lxd-storage-pools
  description: ""
  name: default
  driver: dir
profiles:
- name: default
  config: {}
  description: ""
  devices:
    root:
      path: /
      pool: default
      type: disk
- name: mail-server
  config:
    user.network-config: |
      version: 2
      ethernets:
        eth0:
          addresses:
          - 192.168.33.11/32
          nameservers:
            addresses:
            - 8.8.8.8
            search: []
          routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
  description: Mail Server LXD profile
  devices:
    eth0:
      ipv4.address: 192.168.33.11
      nictype: routed
      parent: eth1
      type: nic
cluster: null
```
`ip addr` on my Vagrant host:

```
luken@luken-tech-test:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:be:4a:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 76347sec preferred_lft 76347sec
    inet6 fe80::a00:27ff:febe:4ae8/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:65:e6:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.33.2/24 brd 192.168.33.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe65:e628/64 scope link
       valid_lft forever preferred_lft forever
6: vetha8400046@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:48:28:3e:e4:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.0.1/32 scope global vetha8400046
       valid_lft forever preferred_lft forever
    inet6 fe80::fc48:28ff:fe3e:e4fa/64 scope link
       valid_lft forever preferred_lft forever
```
`ip addr` in my container:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:14:96:30:67:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.33.11/32 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::9814:96ff:fe30:6743/64 scope link
       valid_lft forever preferred_lft forever
```
`ip r` on my Vagrant host:

```
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.33.0/24 dev eth1 proto kernel scope link src 192.168.33.2
192.168.33.11 dev vetha8400046 scope link
```
`ip r` in my container:

```
default via 169.254.0.1 dev eth0
169.254.0.1 dev eth0 scope link
```
Is there anything I missed (probably a lot)?
There are several things required to make your scheme work:
1. Check the forwarding on the host interfaces: `ip -4 netconf show dev vetha8400046` and `ip -4 netconf show dev eth0`. You should see forwarding enabled in the output. Otherwise, enable it with the `sysctl -w net.ipv4.conf.<iface>.forwarding=1` command, and add the corresponding lines to the `/etc/sysctl.conf` file to make the setting persistent.
2. Check the routing on the host: `ip -4 route get 8.8.8.8 from 192.168.33.11 iif vetha8400046`. You should see a valid route through your default gateway (`... via 10.0.2.2 dev ...`). Also check the reverse route with `ip -4 route get 192.168.33.11 from 8.8.8.8 iif eth0`.
3. Ping the container from the host and vice versa.
4. Set up NAT on the host: `iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE`. Then ping some external host from the container; the counters of this rule in the `iptables-save -c` output should be non-zero.
5. Ask a follow-up question if you get stuck.

A consolidated sketch of steps 1, 2 and 4 is shown below.
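A minimal sketch, using the interface names and addresses from the question (your veth name will differ; check `ip link` on the host):

```sh
# Step 1: forwarding must be enabled on both the container's veth and the uplink.
ip -4 netconf show dev vetha8400046
ip -4 netconf show dev eth0
sysctl -w net.ipv4.conf.vetha8400046.forwarding=1
sysctl -w net.ipv4.conf.eth0.forwarding=1

# Step 2: both the forward and the reverse direction need a valid route.
ip -4 route get 8.8.8.8 from 192.168.33.11 iif vetha8400046
ip -4 route get 192.168.33.11 from 8.8.8.8 iif eth0

# Step 4: masquerade container traffic leaving through eth0, then verify that
# the rule's counters grow after pinging an external host from the container.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables-save -c | grep MASQUERADE
```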
This is the correct setup for having Debian 10 containers accessible in our local network via their own static IPs, and for giving them access to the internet.
Our `Vagrantfile`:
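A minimal sketch of such a `Vagrantfile` (the box name is an assumption; the "public" network and the 192.168.1.200 host IP come from the notes that follow):

```ruby
Vagrant.configure("2") do |config|
  # Box choice is an assumption; any Ubuntu 20.04 box behaves the same here.
  config.vm.box = "ubuntu/focal64"

  # A "public" (bridged) network: Vagrant will ask which host interface to
  # bridge. 192.168.1.200 becomes the Vagrant host's IP on the local network.
  config.vm.network "public_network", ip: "192.168.1.200"
end
```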
Notice that we are setting up a "public" network; this means that when starting up Vagrant, you will be asked to choose an interface to use. Choose the one that you use to connect to your local network.

Notice also that we are setting up only a single IP for now; this one (192.168.1.200) will be our host's IP. We are not setting up the container's IP here.
After starting up our host, enable IP forwarding in its configuration by uncommenting `net.ipv4.ip_forward=1` in `/etc/sysctl.conf`, and restart sysctl by executing `systemctl restart systemd-sysctl`.
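For example, on the host (the `sed` invocation is just one way to uncomment the line):

```sh
# Enable IPv4 forwarding persistently, apply it, and verify.
sudo sed -i 's/^#\s*net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sudo systemctl restart systemd-sysctl
sysctl net.ipv4.ip_forward    # should print: net.ipv4.ip_forward = 1
```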
Now, assuming that you installed and set up LXD on the host correctly, you can init `lxd` with the following configuration. Change `[path-to-storage-pools-directory]` to the actual path of the directory where you are going to store your pools.
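A preseed sketch consistent with the notes below (the `mail-server` profile name and the `eth1` parent interface are carried over from the question; feed it to `lxd init --preseed`):

```yaml
config: {}
networks: []
storage_pools:
- config:
    source: [path-to-storage-pools-directory]
  description: ""
  name: default
  driver: dir
profiles:
- name: default
  config: {}
  description: ""
  devices:
    root:
      path: /
      pool: default
      type: disk
- name: mail-server
  config:
    # cloud-init only sets the nameserver; addresses and routes are left to
    # LXD's routed NIC (see the notes below).
    user.network-config: |
      version: 2
      ethernets:
        eth0:
          nameservers:
            addresses:
            - 8.8.8.8
            search: []
  description: Mail Server LXD profile
  devices:
    eth0:
      ipv4.address: 192.168.1.201
      nictype: routed
      parent: eth1
      type: nic
cluster: null
```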
Note that we are not using cloud-init to set up networking, as it would collide with Debian's networking service and make it fail to start. Note also that we are using cloud-init to set up the nameserver used by our guest.

192.168.1.201 will be our guest's IP. This setup assumes that the local network is 192.168.1.0/24; if your network is different, change the IPs accordingly. Also make sure that the IPs used here are not already used by anything else in the network.

We are using two profiles here, but it's up to you how you organize your configuration.
Now let's run our container:
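With the profiles above, the launch would look something like this (the image alias is an assumption):

```sh
lxc launch images:debian/10 mail-server --profile default --profile mail-server
```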
It should be running and available in the local network, but it won't have access to the internet, and that's the tricky part. This is caused by the routing that `lxd` sets up, based on the assumption that the default gateway is the one to the Vagrant/VirtualBox network. We need to fix that.

When executing `ip r` on the host, you should see a list of routes.
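An illustrative listing, adapted from the question's `ip r` output to this walkthrough's addresses (the veth name is generated by LXD and will differ on your host):

```
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.200
192.168.1.201 dev vetha8400046 scope link
```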
The issue is with the default route, which routes traffic into the Vagrant/VirtualBox network instead of our actual gateway. Let's remove it:
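For example (the gateway address is taken from the listing above):

```sh
sudo ip route del default via 10.0.2.2
```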
And let's add a route that will forward packets to the actual gateway of our local network (assuming that 192.168.1.1 is the IP of our local gateway):
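Something like:

```sh
sudo ip route add default via 192.168.1.1
```

Note that routes added with `ip route` do not survive a reboot; to make the change permanent, put it in the host's netplan or `/etc/network/interfaces` configuration.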
Now the traffic from the container will be going to the correct gateway, and we are done!