I run a container. It has all the capabilities and mounts all the directories from root (except /proc). When I call lvcreate from inside of it, I get:
# lvcreate -v -L104857600B -n vol1 default
Finding volume group "default"
Archiving volume group "default" metadata (seqno 17).
Creating logical volume vol1
Creating volume group backup "/etc/lvm/backup/default" (seqno 18).
Activating logical volume "vol1".
activation/volume_list configuration setting not defined: Checking only host tags for default/vol1
Creating default-vol1
Loading default-vol1 table (252:4)
Resuming default-vol1 (252:4)
And the command hangs. I also get this in the logs:
Sep 12 12:03:31 node3 systemd-udevd[12529]: Process '/sbin/dmsetup udevcomplete 23072978' failed with exit code 1
If I interrupt it with Ctrl-C, I see that the logical volume was created. I can also unblock the command by issuing dmsetup udevcomplete_all from inside the same container. If I call lvcreate on the host, it works normally and exits cleanly.
I believe this problem has something to do with udev cookies not being shared between the container and the host. However, I have no idea what exactly LVM is trying to do here or how to fix the problem.
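One way to check this is to look at the outstanding cookies on both sides; device-mapper udev cookies are backed by System V semaphores, so something like the following should show whether the host and the container can see the same cookie:

# on the host, while lvcreate is hung: list outstanding device-mapper udev cookies
dmsetup udevcookies
# the cookies are SysV semaphores, so they also show up here
ipcs -s
# repeat both commands inside the container; if the IPC namespaces are separate,
# the cookie created by lvcreate will be visible on only one side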
I need this so that a containerised kubelet can call a flexvolume plugin that allocates logical volumes.
I'm gonna say that you should avoid using udev in this case. This is easy enough to do with LVM, and LVM is entirely capable of handling volumes and device settling on its own. Within your /etc/lvm/lvm.conf file you will find the lines that control udev behaviour.
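The surrounding comments vary by distribution and LVM version, but these are the udev options in the activation section, and the stock defaults look roughly like this:

activation {
    # wait for udev to finish processing device nodes before LVM continues (default 1)
    udev_sync = 1
    # let udev rules, rather than LVM itself, manage /dev nodes and symlinks (default 1)
    udev_rules = 1
}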
Set those values to zero and see if that clears it up. It will at the very least rule out udev. You will have to bring your volumes offline to make this change, as you will be switching from udev management to LVM management.
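A sketch of that offline/edit/online cycle, assuming the volume group is called default as in the question:

# deactivate every logical volume in the group before changing the config
vgchange -an default
# in /etc/lvm/lvm.conf, activation section: set udev_sync = 0 and udev_rules = 0
# then bring the group back up
vgchange -ay default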