I am currently in the process of migrating a server running several Linux containers to a server managed by Proxmox. In the past, when I moved a Linux container to a different host, I just used the LXD API and the simplestreams protocol and executed an lxc copy command - quite simple. But how is it done if the remote is managed by Proxmox, so that the migrated container is known to Proxmox afterwards?
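For context, this is roughly what that old workflow looked like (remote name, address and password below are placeholders):
# on the source host: register the target LXD server as a remote, then copy
lxc remote add newhost https://192.0.2.10:8443 --accept-certificate --password secret
lxc copy mycontainer newhost:mycontainer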
I'm spinning up lightweight containers on a Linux host using LXD/LXC.
The sole purpose of these containers is to host .NET and .NET Core apps.
For a while I've been using Ansible, but recently I found that I could actually embed an init script into the user data of the container configuration, and cloud-init would execute it.
This is great, and allows me to set up a given container with exactly the packages it needs, except for one problem:
Microsoft
(I know, I know... save the jokes and slurs :-D)
Unlike most third-party package providers, MS bundle their deb source entry and GPG key inside a standalone dpkg package file. This package isn't listed in the normal repos, so it basically has to be downloaded with wget and then installed using a regular dpkg command.
Right now, this is how I'm doing things:
#cloud-config
# apply updates using apt
package_update: true
package_upgrade: true
# set hostname
hostname: ****
fqdn: ****
manage_etc_hosts: true
# Install 3rd party software repos
# NOTE: This is done using runcmd due to the way Microsoft distribute things as a raw dpkg
runcmd:
  - [wget, "https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb", -O, /root/packages-microsoft-prod.deb]
  - dpkg -i /root/packages-microsoft-prod.deb
  - rm /root/packages-microsoft-prod.deb
  - apt update
  - apt-get install dotnet-sdk-3.1 -y
  - apt-get install dotnet-sdk-5.0 -y
# Install standard packages
packages:
  - apt-transport-https
  - python3
  - python-is-python3
  - mc
  - gnupg
  - nginx
  - git
# Add users
users:
  - name: ****
    ssh-authorized-keys:
      - ssh-rsa **** rsa-key-BLAH
    sudo: ['****']
    groups: sudo
    shell: /bin/bash
final_message: "Container initialisation complete."
The key part is the "runcmd" section.
Because I'm using "runcmd", this runs AFTER everything else, including the normal package-install step where I list all the standard packages I need.
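As far as I can tell from a stock Ubuntu /etc/cloud/cloud.cfg, the ordering is roughly this (heavily trimmed excerpt, so treat it as an approximation):
cloud_config_modules:
  - apt-configure                    # "apt:" sources/keys are configured here
  - runcmd                           # runcmd is only written out to a script here
cloud_final_modules:
  - package-update-upgrade-install   # the "packages:" list is installed here
  - scripts-user                     # the runcmd script actually runs here, near the end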
What I would ideally LIKE to do is install the dpkg file first, then just add the package names to the normal packages section, for example:
# Something here to download and install the dpkg
# Install standard packages
packages:
  - apt-transport-https
  - python3
  - python-is-python3
  - mc
  - gnupg
  - nginx
  - git
  - dotnet-sdk-3.1
  - dotnet-sdk-5.0
I did try putting ONLY the dpkg download/install in runcmd, but because runcmd runs as the very last step, it causes the packages part to fail: the dotnet repo isn't there yet when the packages are installed.
I also tried using the "apt" module to install "microsoft-prod.list" into "/etc/apt/sources.list.d", but that also failed: because MS don't publish their GPG key separately, adding the source makes apt update fail due to it being an untrusted source.
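The apt-module attempt looked roughly like this (the source line is my reconstruction of what microsoft-prod.list contains, so treat it as approximate):
apt:
  sources:
    microsoft-prod:
      # with no key/keyid available to supply here, apt update rejects
      # the repository as untrusted
      source: "deb [arch=amd64] https://packages.microsoft.com/ubuntu/20.04/prod focal main"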
I've scoured the module docs for cloud-init, and I can't find anything that seems to suggest a regular dpkg file can be downloaded and added, hence why I'm asking here :-)
We currently run our services in a series of LXD containers - we have one running an nginx server as a reverse proxy pointing at each service, and where I need to connect to a container from the host or another container, I use the LXD-assigned hostname. It works very well, and it's extremely clean.
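For example, from the host or another container I can reach a service simply by its container name (made-up name below; in my setup these names resolve via LXD's DNS):
# reach another container by its LXD-assigned hostname
ping -c 1 service1.lxd
curl http://service1.lxd/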
I'm currently looking at setting up Grafana for monitoring in a container, hooked up to a Prometheus instance on the host to feed it data. It would probably be helpful to be able to refer to the LXD host by a hostname - like I do for the containers - for this and other projects. What would be the 'correct' way to refer to the host from a service running in a container?
I observed that the command su takes too long (30 seconds) when it is executed in an LXD Debian container that is nested inside an Ubuntu LXD container. This overhead does not occur in Debian containers that are not nested, nor in Ubuntu containers nested inside an Ubuntu container. Does anyone have an explanation for this? Below I describe how to reproduce this issue.
Setup
I have set up LXD containers on an Ubuntu 18.04 machine to use nested containers (as described in https://ubuntu.com/blog/nested-containers-in-lxd). I used the system apt packages (lxd and lxd-client) to install LXD. Then I created two containers as follows:
lxc launch ubuntu:20.04 c1 -c security.nesting=true
lxc launch images:debian/10 c2
Then, inside container c1, I created two nested containers:
lxd init
lxc launch ubuntu:20.04 c3
lxc launch images:debian/10 c4
In the Debian containers, I created a non-root user debian with the following command:
adduser --home /home/debian --gecos Debian --disabled-password debian
In the Ubuntu containers there is no need to create a non-root user, because the ubuntu user is already defined.
Measuring how long su takes
For each container I measure the time required for the command su with the following commands (where user is either ubuntu or debian):
time pwd
time su $user -c pwd
It is expected that the second command takes more time due to the overhead of the su command. In all cases this overhead is around 59 milliseconds, except for container c4 (i.e., the Debian container inside the Ubuntu container), where the overhead is around 30 seconds.
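For the record, I run these measurements inside each container via lxc exec, along these lines (this is just one way to do it):
# non-nested Debian container, run from the host
lxc exec c2 -- bash -c 'time su debian -c pwd'
# nested Debian container c4, run from the host through c1
lxc exec c1 -- lxc exec c4 -- bash -c 'time su debian -c pwd'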
I'm experimenting with lxc/lxd in Vagrant, but I'm quite new to it. I managed to create a running container, but I cannot ping anything (including 8.8.8.8) from inside it. I can ping its IP from my top-level non-virtual system, but it refuses SSH connections. I can enter the container only from the container's host (the Vagrant machine) by using lxc exec my-container /bin/bash.
I tried to set up my container in routed mode, and I still want to use that mode, for learning purposes. The LXD/LXC documentation seems to be somewhat lacking, though.
I tried to follow these instructions: https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/ but it didn't work for me in the end. I may have missed something, because I'm not well versed in Linux networking yet.
My Vagrant host is running Ubuntu 20.04.
My LXC container is running Debian 10.
LXC configuration on my Vagrant host:
config:
  core.https_address: '[::]:8443'
  core.trust_password: true
networks: []
storage_pools:
- config:
    source: /home/luken/lxd-storage-pools
  description: ""
  name: default
  driver: dir
profiles:
- name: default
  config: {}
  description: ""
  devices:
    root:
      path: /
      pool: default
      type: disk
- name: mail-server
  config:
    user.network-config: |
      version: 2
      ethernets:
        eth0:
          addresses:
          - 192.168.33.11/32
          nameservers:
            addresses:
            - 8.8.8.8
            search: []
          routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
  description: Mail Server LXD profile
  devices:
    eth0:
      ipv4.address: 192.168.33.11
      nictype: routed
      parent: eth1
      type: nic
cluster: null
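The container itself was launched with that profile applied, roughly like this (the second profile supplies the routed NIC):
# Debian 10 container using the default profile plus the routed-NIC profile
lxc launch images:debian/10 my-container --profile default --profile mail-server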
ip addr in my Vagrant host:
luken@luken-tech-test:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:be:4a:e8 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 76347sec preferred_lft 76347sec
inet6 fe80::a00:27ff:febe:4ae8/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:65:e6:28 brd ff:ff:ff:ff:ff:ff
inet 192.168.33.2/24 brd 192.168.33.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe65:e628/64 scope link
valid_lft forever preferred_lft forever
6: vetha8400046@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:48:28:3e:e4:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.0.1/32 scope global vetha8400046
valid_lft forever preferred_lft forever
inet6 fe80::fc48:28ff:fe3e:e4fa/64 scope link
valid_lft forever preferred_lft forever
ip addr in my container:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:14:96:30:67:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.33.11/32 brd 255.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::9814:96ff:fe30:6743/64 scope link
valid_lft forever preferred_lft forever
ip r in my Vagrant host:
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.33.0/24 dev eth1 proto kernel scope link src 192.168.33.2
192.168.33.11 dev vetha8400046 scope link
ip r in my container:
default via 169.254.0.1 dev eth0
169.254.0.1 dev eth0 scope link
Is there anything I missed (probably a lot)?