What is the recommended way to migrate from KVM to VMware for an LVM-based guest with multipathing? I found that similar questions were already asked a few years ago:
How to migrate KVM based VMs running in LVM setup to Vmdk images
converting KVM virtual machines to VMware-vsphere
But the problem with vCenter Converter is that, according to its documentation, Linux volumes mounted by device-mapper multipath aren't supported. What is currently the proper way to proceed in a multipath environment?
VMware Converter can migrate from any source machine regardless of the source type (virtual/physical/KVM/Hyper-V). The only trouble is that VMware Converter can't migrate software RAID or LVM. The solution is to create a skeleton server with the bare minimum of the source machine and push everything over with tar from the source server.
I had to use this solution when I was migrating quite a few racks of bare-metal servers to VMware, some of which had software RAID or LVM installed.
Steps to follow for this:
1: Create your target vm box
2: Install a minimum version of the same system that your source has (network, ssh server and tar must be available)
3: Create a list of directories we don't want to include
boot proc dev sys etc/fstab etc/lvm etc/blkid mnt/yourexternalhdd
save it under /tmp/nocopy
4: Take a snapshot of your target in case something goes wrong
5: SSH to your source and, as root: cd /; tar -zcvpf - -X /tmp/nocopy * | ssh target "cd /; tar -zxvpf - --numeric-owner"
6: Reset the target.
E.g.:
tar -zcvpf - -X /tmp/nocopy * | ssh [email protected] "cd /; tar -zxvpf - --numeric-owner"
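As a sketch, the exclude file from step 3 can be created like this (the paths are relative to /, since the tar in step 5 is run from there; mnt/yourexternalhdd is of course a placeholder for your own mounts):

```shell
# Create the exclude file on the source server.
# Paths are relative to / because tar is run from / in step 5.
cat > /tmp/nocopy <<'EOF'
boot
proc
dev
sys
etc/fstab
etc/lvm
etc/blkid
mnt/yourexternalhdd
EOF
```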
In order to convert an existing disk image to VMware's VMDK format, you should use the program qemu-img from the package qemu-utils (in Ubuntu).
The process is straightforward:
Transfer the disk image to ESXi using scp (enable SSH in ESXi) or NFS
Create a new virtual machine with custom options and add the converted disk
Boot
If you have LVM volumes, fixing the UUIDs will be tricky, so here are some extra tweaks.
Create the skeleton machine as before, exactly the same as the source box. Then boot this machine with any kind of rescue CD: Ubuntu, Debian, CentOS, Rocky Linux, you name it, it doesn't matter, though using the same system as the source is easiest.
Once the skeleton machine is up on the rescue CD, connect to the source box with this:
ssh user@host "sudo -S dd if=/dev/sdS bs=4M" | dd of=/dev/sdT status=progress
/dev/sdS is the source disk; you can get this info with fdisk -l. On Xen this is most likely /dev/xvda.
/dev/sdT is the destination disk, most likely /dev/sda if the destination system is ESXi. You can get this info with fdisk -l as well.
You will also need sudo rights for that user, so add the user to the sudoers file, /etc/sudoers, with this:
migrationusername ALL=(ALL:ALL) NOPASSWD: ALL
That's it. With this you can migrate any Linux. The only issue is the image size: you cannot migrate thin, dd pulls the whole image, so 100 GB is 100 GB.
The only thing you will need to fix after the process finishes is the Ethernet adapter name, nothing else.
The Ethernet adapter name will be either ens32 or ens192 instead of eth0. You can get the real name with "ifconfig -a". On Ubuntu this will be in an /etc/netplan/00-blahblah config file, or, if the release is older than 16.04, in /etc/network/interfaces.
On CentOS it's /etc/sysconfig/network-scripts/ifcfg-eth0. Rename ifcfg-eth0 to ifcfg-ens32 or whatever ifconfig -a says.
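As a sketch of that rename (using a demo directory so it can be tried safely; on a real CentOS box CFGDIR would be /etc/sysconfig/network-scripts, and ens192 is just an assumed example name):

```shell
# Sketch of renaming the CentOS interface config to the new adapter name.
NEWIF=ens192                          # assumption: use the name "ifconfig -a" reports
CFGDIR=/tmp/network-scripts-demo      # real path: /etc/sysconfig/network-scripts
mkdir -p "$CFGDIR"
printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=yes\n' > "$CFGDIR/ifcfg-eth0"

mv "$CFGDIR/ifcfg-eth0" "$CFGDIR/ifcfg-$NEWIF"
# Keep the DEVICE= line inside the file consistent with the new name.
sed -i "s/^DEVICE=.*/DEVICE=$NEWIF/" "$CFGDIR/ifcfg-$NEWIF"
```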
Also on CentOS and Rocky Linux you might need to fix the /etc/udev/rules.d/70-persistent-net.rules file so it matches the correct MAC address.
So comment out the old MAC entry and add the new MAC with the new ens32/ens192 (or whatever) name. After this you need to reboot the box, otherwise it won't pick up the new MAC.
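As a sketch, the edited rules file would end up looking something like this; the MAC addresses here are made-up examples (take the real one from the VM's settings or "ifconfig -a"; VMware-assigned MACs start with 00:50:56):

```
# /etc/udev/rules.d/70-persistent-net.rules (example values)
# old KVM NIC, commented out:
# SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:aa:bb:cc", NAME="eth0"
# new VMware NIC:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:50:56:12:34:56", NAME="ens192"
```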