We have an Ubuntu 10.04 server running KVM very nicely, but are having trouble figuring out the cleanest (and fastest) way to do unattended installs of 10.04 guests.
Requirements:
Must use LVM volumes for the guests' storage (without some qemu-img conversion or some such)
Must use virtio for network and disk storage (preferably without hacking in the XML files)
Must use a local mirror - so it's quick (e.g. <5 minutes)
Really really want it to be fully automated and non-interactive. (i.e. kick it off and have a functional system running a few minutes later)
Would like to be able to specify the IP address at creation time, so the guest is easy to reach without checking the DHCP server.
Would like to be able to specify different flavors / distros / versions, etc.
Option 1: We don't like doing this from the virt-manager UI, because you have to be on the physical server (virt-manager can't install to an LVM volume remotely). It does work, but you have to be running VNC and GNOME on the server, and that's not cool. Plus it's interactive, you have to click through a lot of options, and we'd still end up writing wrapper scripts around it anyway.
Option 2: vmbuilder from the python-vm-builder package seems like exactly what we want, because you can specify a local mirror (we use apt-proxy for this), but we haven't been able to get it to use an LVM volume or virtio for the disk.
vmbuilder kvm ubuntu --suite=lucid --flavour=virtual --arch=amd64 --mirror=http://192.168.1.1:9999/ubuntu -o --libvirt=qemu:///system --ip=192.168.1.94 --part=vmbuilder.partition --raw=/dev/VG0/LVtest --templates=mytemplates --firstboot=/root/vm/boot.sh --user=linuxadmin --name=linuxadmin --pass=secretpass --mem=256 --hostname=test --bridge=br0
This just ignores the --raw= option and creates a qcow image file, and it doesn't use virtio. I suspect I could convert the image file to an LVM volume and manually add the virtio stuff in the XML, but that seems annoying and messy.
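For the record, that manual workaround would look something like the sketch below. The qcow2 path is hypothetical (vmbuilder writes its images into an output directory; adjust to whatever it actually produced), and the LV name reuses the one from the command above.

```shell
# Flatten the qcow2 image vmbuilder produced to raw data, straight onto
# the pre-created logical volume (image path is hypothetical).
qemu-img convert -O raw ubuntu-kvm/disk0.qcow2 /dev/VG0/LVtest

# Then switch the disk to virtio in the domain XML (virsh edit test):
#   <disk type='block' device='disk'>
#     <source dev='/dev/VG0/LVtest'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>
```

Note that the guest's /etc/fstab may still reference /dev/sda*, which is part of why this approach is messy.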
Option 3: This is what we're using, but it's not optimal: it's kludgey, doesn't let us specify the IP address, and we can't dynamically control (from the install command/script) some of the parameters that are hardcoded in the kickstart file.
Manually create a Logical Volume -- then...
# virt-install --connect qemu:///system -n test -r 1024 --vcpus=1 --disk path=/dev/VG0/LVvm-test,bus=virtio --pxe --vnc --noautoconsole --os-type linux --os-variant virtio26 --accelerate --network=bridge:br0 --hvm
This goes to a PXE boot server, which points to a local install server with a kickstart file that does a lot of the configuration. It DOES use virtio for both network AND disk, so that's good. It DOES use a local mirror and LVM, so that means it meets our minimal requirements, but we would like it to be 100% automated. Right now, you have to connect to the VNC console (via virt-manager) and select "Install" on the Lucid installer -- so that breaks the fully automatic thing. And of course, you have to look in the syslog to see what IP address it got so you can ssh to the box.
Surely we're not the ONLY ONES that want this functionality!!!
In RHEV, you simply deploy VMs from a ready-made template, which is basically either a copy of the original template image or a snapshot of it (to save space).
I'm pretty sure that if you have a golden image of your build with everything in it, you can set it up to be a template, and your scripts will just clone it using qemu-img or even dd.
The template should have the VM-specific details, like SSH host keys, stripped out (sys-unconfig on RHEL/Fedora; no idea how to do that on Ubuntu), so that when a VM deployed from it is started, that data gets regenerated.
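A sketch of what the clone step could look like. The LV names here are made up, and the template is assumed to hold a raw image the same size as (or smaller than) the new volume:

```shell
# Create a volume for the new guest (names are hypothetical).
lvcreate -L 10G -n LVvm-web1 VG0

# Block-copy the golden image; plain dd works since both sides are
# raw block devices.
dd if=/dev/VG0/LVtemplate of=/dev/VG0/LVvm-web1 bs=4M

# If the template lives as a qcow2 file instead, flatten it to raw:
qemu-img convert -O raw /var/lib/libvirt/images/template.qcow2 /dev/VG0/LVvm-web1
```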
I'd go with Option 2, but you'll have to hack vmbuilder for that, because it doesn't mount by UUID (and thus, even if you change the libvirt XML template to use virtio, vmbuilder will hardcode /dev/sd* into /etc/fstab);
--raw is supposed to work, though -- please file a bug for that. Personally, I'm thinking of using a combination of virt-install (to create the VM and initialize its disk) and PXE-booting the debian-installer in auto mode with a preseed file.
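That combo could look roughly like this (untested sketch; the mirror and preseed URLs reuse the addresses from the question, and auto=true/priority=critical is standard debian-installer preseeding):

```shell
# virt-install fetches the installer kernel/initrd from --location itself,
# so you don't even strictly need a PXE server for this variant.
virt-install --connect qemu:///system -n test -r 1024 --vcpus=1 \
  --disk path=/dev/VG0/LVvm-test,bus=virtio \
  --network=bridge:br0,model=virtio \
  --location http://192.168.1.1:9999/ubuntu/dists/lucid/main/installer-amd64/ \
  --extra-args "auto=true priority=critical url=http://192.168.1.1/preseed.cfg" \
  --os-type linux --os-variant virtio26 --accelerate --vnc --noautoconsole --hvm
```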
How about giving Cobbler (https://fedorahosted.org/cobbler/) a try?
I've already built such a system, using PXE boot with a preseed file, Puppet and Sabayon. As a result, the system came up installed with a root password set, some users, a fully configured user profile, SSH, OpenOffice and more.
There was a problem with Sabayon and Firefox, though.
But all of this takes about 5-10 minutes to deploy on my system.
Another approach was to just make clones with dd and update them with Puppet.
It's pretty easy to take your third option and make the Lucid installer auto-start instead of waiting for you to hit Enter.
In our case, I remaster the ISO and boot from it; just change isolinux.cfg to have a default label. Doing it over PXE is basically the same thing.
Then you can specify anything (including the IP!) in a preseed file.
https://help.ubuntu.com/10.04/installation-guide/i386/preseed-using.html
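For reference, here's roughly what those two pieces look like. The label name and addresses are examples; the d-i lines are standard debian-installer netcfg preseed keys:

```
# isolinux.cfg: auto-start by pointing "default" at your install label
# (assuming the label is called "install") with a short timeout.
default install
timeout 10

# preseed.cfg fragment: static network configuration instead of DHCP.
d-i netcfg/disable_dhcp boolean true
d-i netcfg/get_ipaddress string 192.168.1.94
d-i netcfg/get_netmask string 255.255.255.0
d-i netcfg/get_gateway string 192.168.1.1
d-i netcfg/get_nameservers string 192.168.1.1
d-i netcfg/confirm_static boolean true
d-i netcfg/get_hostname string test
```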