Until recently I was using Proxmox 4.0, and this was my procedure for converting a VMware VM to Proxmox:
- Create a working VM and uninstall VMware Tools.
- Mount the Proxmox drivers ISO and copy the necessary drivers to the C:\PVE folder.
- Start the Windows virtual machine on VMware and run the Mergeide.reg file.
- Make sure Atapi.sys, Intelide.sys, Pciide.sys, and Pciidex.sys are in the %SystemRoot%\System32\Drivers folder.
- Shut down Windows.
Then prepare the VM disk using vmware-vdiskmanager:
"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager" -r disk0.vmdk -t 0 disk0-pve.vmdk
Then convert the VMDK to a qcow2 file:
qemu-img convert -f vmdk disk0-pve.vmdk -O qcow2 disk0-pve.qcow2
Now in Proxmox, create a VM using the same hardware spec as the VM in VMware. Then rename disk0-pve.qcow2 to vm-VID-disk-1.qcow2.
Then upload the qcow2 file to /var/lib/vz/images/VID.
Run the VM and sorted…
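To make the rename-and-upload step above concrete, here is a minimal sketch of how the destination path is built. VID=100 is an illustrative assumption; substitute your VM's numeric ID.

```shell
# Sketch of the destination path used in the steps above.
# VID=100 is an assumed example VM ID.
VID=100
disk="vm-${VID}-disk-1.qcow2"
dest="/var/lib/vz/images/${VID}/${disk}"
echo "$dest"
```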
Now here’s my problem.
I just upgraded to Proxmox 4.4, and I believe it all changed from version 4.2 onwards.
My Proxmox server installation created two storage areas (local & local-lvm). When you create a VM, the disks are placed on the local-lvm storage. The /etc/pve/qemu-server/VID.conf file shows the location as local-lvm:vm-VID-disk-1.
So my first question is this.
1) What do I do with my .qcow2 file? I can’t upload it to /var/lib/vz/images as that’s empty, and I have no idea how to navigate to local-lvm (I’m assuming you can’t, as it’s an LVM).
2) How do I get the vm-VID-disk-1.qcow2 file (I created above) to local-lvm:vm-VID-disk-1?
Other questions…
On the old Proxmox 4.0 I used to switch off the VM and download the qcow2 as a backup (I know I can snapshot), but the qcow2 file was for off-site emergencies.
3) So how do I get the local-lvm:vm-VID-disk-1 copied to a vm-VID-disk-1.qcow2?
Of course, the other problem is that the "created" local storage is small (sized by the Proxmox installer). It decided on 200GB.
4) However, one of the disks (qcow2 file) is 500GB, so how do I get that onto local-lvm?
Of course, on version 4.0 it wasn't a problem, because it was all one storage area under "/" and I could upload and download the qcow2 files via SFTP.
I could put the disk on a USB disk and mount it, maybe?
What are your thoughts on mounting an SMB share from my PC that has the qcow2 file? Does Proxmox even support SMB mounting, or will I need to install the Debian packages? If so, will that break Proxmox and its performance as a hypervisor?
Sorry lots of questions :-)
That's a workaround for importing a VMDK into a Proxmox VM, and that's what I've done.
1) You can, as it is an LVM. You can look at https://www.howtoforge.com/linux_lvm for a brief overview of LVM. So:
You can see it:
lvdisplay
You can mkfs on it (but be careful on Proxmox):
mkfs.ext4 /dev/vgname/lvname
You can mount it:
mount /dev/vgname/lvname /mnt/lvname
2) You must first check the VMDK image size. This is important because you cannot put a larger image on a smaller block device. Details: https://askubuntu.com/questions/657562/extracting-qcow2-image-to-a-smaller-real-drive/657682
qemu-img info disk0.vmdk
This shows the disk size, which is the vmdk file's actual disk usage, and the virtual size, which is the VM's hard disk size.
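As a sketch, the virtual size can be pulled out of the qemu-img info output in bytes for an exact comparison. The sample output below is illustrative, not from a real run (82G = 88046829568 bytes).

```shell
# Sketch: extract the virtual size in bytes from `qemu-img info` output.
# The sample output is an illustrative assumption, not a captured run.
info='image: disk0.vmdk
virtual size: 82G (88046829568 bytes)
disk size: 12G'
# Only the "virtual size" line carries a "(N bytes)" suffix, so sed
# matches just that line.
virtual_bytes=$(printf '%s\n' "$info" | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')
echo "$virtual_bytes"
```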
Then create a VM with an 82G "ide" disk (in my case).
You can check the disk path with:
lvdisplay
Then copy the vmdk over it in raw format:
qemu-img convert -p -O raw disk0.vmdk /dev/vgname/vm-111-disk-1
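Before running the convert, a guard like this avoids writing past the end of the logical volume. The byte counts are illustrative stand-ins: in practice, take virtual_bytes from qemu-img info and lv_bytes from blockdev --getsize64 on the LV device.

```shell
# Sketch of the size check described above. The numbers are assumed values:
# virtual_bytes would come from `qemu-img info`, lv_bytes from
# `blockdev --getsize64 /dev/vgname/vm-111-disk-1`.
virtual_bytes=88046829568   # vmdk virtual size (82G)
lv_bytes=88046829568        # logical volume size
if [ "$virtual_bytes" -le "$lv_bytes" ]; then
    echo "fits"
else
    echo "too big - resize the vmdk or enlarge the VM disk first"
fi
```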
That's it, works for me.
For the size problem, you can resize your vmdk with
qemu-img resize your.vmdk 82G
or you can resize the VM's disk on the web interface. But you MUST check qemu-img info to be sure the vmdk is smaller than the VM's disk.
You can upload files with SFTP, or use USB as described above. Use different directories or disks, or mount them if necessary.
You can use SMB on Proxmox; smbclient is already installed, I guess. This is not going to break Proxmox. There may be security issues, though: check the open ports on Proxmox and limit them to certain IP subnets.
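If you want to mount the share from your PC rather than pull files with smbclient, a possible /etc/fstab entry might look like the one below. The server IP, share name, mount point, and credentials file are all assumptions for illustration; this needs the cifs-utils Debian package installed.

```
# Hypothetical fstab line - IP, share name and paths are assumptions
//192.168.1.50/images  /mnt/pcshare  cifs  credentials=/root/.smbcred,vers=3.0  0  0
```

Keeping the username and password in a root-only credentials file avoids exposing them in /etc/fstab.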
I'm going through something similar. I had some VM images running on Open Media Vault. OMV supports regular plain-vanilla KVM using tools like virsh. I wanted to move the images to my new server running Proxmox 6.1.
First I exported the image from OMV as a qcow2 file and moved it to my Proxmox server. Then in Proxmox I created a "Directory" style storage. Directory storage is tied to an underlying directory on the physical file system - in my case it was a location, /data. When the "Directory" storage is created, a number of subdirectories are automatically created in the file system directory. The directory that holds the VM images is called images. There is also a directory created called vm-images, so don't get confused.
I created a new VM in Proxmox, specifying the "Directory" storage and using a qcow2 format. That created a file /data/images/$VM_ID/vm-$VM_ID-disk-0.qcow2, where $VM_ID is the numeric identifier of the VM, which usually starts at 100 and goes up. I used the exported qcow2 file to overwrite that one.
That works and I was able to start the VM, but apparently using the qcow2 file on the filesystem isn't all that efficient. So then in the web UI I went to the VM's hardware pane, selected the hard disk, and used the "Move disk" button. This allowed me to copy the disk image to another storage medium, and it also converted it from a qcow2 file to the raw format. I could use the "Delete source" checkbox when doing the move, or delete the source from the /data/images/$VM_ID/ directory later. This move can even be done while the VM is running.
While I was doing this, by the way, some of my VMs became all but unresponsive. It was my own fault. First, the data directory is made up of a mirrored pair of Western Digital Red 3TB drives. The WD Red drives are designed for long life and reliability in file servers. They are conventional 5400 rpm drives, so they are comparatively slow. Second, I had an rsync job writing about 1.5TB of data to the mirrored pair at the same time. Those drives couldn't handle the I/O of the rsync job and hosting the virtual drives for several VMs at the same time. The "I/O Delay" in the pve node summary panel in the web UI showed values greater than 50%. I/O Delay is the same as the wait time one would normally see with top. When I moved the VM image storage to different devices, everything was fine.
My other Proxmox mistake is that one of the VMs I moved was a Linux "desktop" (Linux Mint) that uses the Cinnamon desktop environment. The Mint project basically takes Ubuntu Linux and combines it with a well-polished Cinnamon for a very nice Linux desktop experience. But Cinnamon is a dog if you don't have a GPU. The OMV "server" was an older gaming PC that I got second-hand. It has an NVIDIA GeForce GTX 560. It wasn't a good GPU, but Cinnamon doesn't need a good one, just any at all. The Proxmox server was a real server; it didn't have a GPU at all.
I couldn't even log into the VM through the console. I used Ctrl-Alt-F3 to get to the command line, ran
sudo apt install mint-meta-mate
to install the MATE desktop, then ran
sudo update-alternatives --config x-session-manager
to select MATE as the default. That works fine even without a GPU.