When creating the virtual machine on server A, virt-install created a libvirt-volume "VM" in the libvirt storage-pool "POOL".
virt-install --disk pool=POOL,size=$HDDSIZE,$DISKOPT -n VM
As expected, libvirt shows e.g.
# virsh vol-list POOL
Name Path
-----------------------------------------
VM /dev/VOLUMEGROUP/VM
anotherVM /dev/VOLUMEGROUP/anotherVM
When migrating the virtual machine to server B, which does not share storage with server A, I instead created the logical volume with lvm:
lvcreate -l $EXTNSIZE -n lvmVM VOLUMEGROUP
Not unexpectedly (in hindsight), libvirt on server B does not recognize lvmVM:
# virsh vol-list POOL
Name Path
-----------------------------------------
someOtherVM /dev/VOLUMEGROUP/someOtherVM
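In hindsight, the libvirt-native way to create the logical volume would presumably have been virsh vol-create-as, which keeps the pool's volume list in sync. A sketch, assuming POOL is a pool of type "logical" backed by VOLUMEGROUP (names and size taken from above):

```shell
# Sketch: create the volume through libvirt instead of plain lvcreate,
# so the storage pool's volume list stays consistent with LVM.
# Assumes POOL is a libvirt pool of type "logical" backed by VOLUMEGROUP.
virsh vol-create-as POOL lvmVM 40G

# libvirt allocates /dev/VOLUMEGROUP/lvmVM and tracks it as a pool volume:
virsh vol-list POOL
```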
I was able to manually start VM anyway. Fine. But...
- Are there dangers or disadvantages in leaving libvirt blind to the new volume inside POOL like this?
- e.g. will libvirt fail to auto-restart the VM on reboot?
(Or, from the other perspective: what is gained by using libvirt volumes instead of plain LVM volumes inside a storage pool, after the initial creation via e.g. virt-install?)
- Is there a way to make libvirt recognize the volume?
- ... but would this recognition/conversion process write metadata into lvmVM and thus corrupt the VM's virtual disk?
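For what it's worth, my current candidate for the last two points is pool-refresh. My assumption (untested) is that for a "logical"-type pool this only rescans the backing volume group's LVM metadata and does not write into the logical volumes themselves:

```shell
# Sketch: ask libvirt to rescan the pool's backing volume group.
# For a pool of type "logical" this should only re-read LVM metadata,
# not touch the contents of the logical volumes (my assumption, untested).
virsh pool-refresh POOL

# If the rescan works, lvmVM should now appear alongside the other volumes:
virsh vol-list POOL
```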
Details:
- On both servers, the lvm group "VOLUMEGROUP" is used for libvirt storage pool "POOL".
- On both servers, lvm shows the expected volume. Just the libvirt volume-"layer" is different.
- Debian Linux 8 x86_64, lvm2 2.02.111-2.2+deb8u1, libvirt 1.2.9-9+deb8u3
- size is 40 GB, DISKOPT='bus=virtio,cache=writethrough,io=threads,sparse=false'
- beneath libvirt, KVM and qemu are used.
libvirt xml for virtual machine "VM" has:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/VOLUMEGROUP/VM'/>
  <target dev='vda' bus='virtio'/>
</disk>