I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.)
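In case it helps, the script boils down to roughly this (simplified; the image size, mount point and mirror are just placeholders):

dd if=/dev/zero of=rootfs.img bs=1M count=1024                    # create a 1 GB image file
mkfs.ext3 -F rootfs.img                                           # format the image (ext3)
mkdir -p /mnt/rootfs
mount -o loop rootfs.img /mnt/rootfs                              # attach it via a loop device
debootstrap lenny /mnt/rootfs http://archive.debian.org/debian/   # minimal lenny install (lenny now lives on archive.debian.org)
# ...then edit /etc/network/interfaces, drop in SSH public keys, install the web application...
umount /mnt/rootfs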
These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works.
I wonder if anyone can suggest what would be involved in customizing such root filesystem images so that they could be used with other virtualization software, such as VMware or Xen, or as an Amazon EC2 instance? Two particular concerns are:
If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way) do there exist tools to convert between the different formats?
I assume that in the filesystem at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?
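For example, I imagine replacing my static TUN/TAP stanza with a plain DHCP one would be a reasonable starting point on other hypervisors, something like this written into the mounted image (mount point as in my script, purely illustrative):

cat > /mnt/rootfs/etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

# fall back to DHCP instead of the static TUN/TAP address
auto eth0
iface eth0 inet dhcp
EOF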
Many thanks for any suggestions...
VirtualBox has "VBoxManage convertdd" to import raw disk images (generated with dd) into its own special .vdi format. I believe there are similar things for VMWare etc.
The various virtualisation products often have some kernel extensions (like the VirtualBox Guest Extensions) which enable the guest operating system to co-operate with the host in various ways, which it's often helpful to have installed.
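For instance, in a Debian-based guest the VirtualBox Guest Additions are typically installed by attaching the additions ISO to the VM and running the installer (the installer file name varies between VirtualBox versions):

apt-get install build-essential linux-headers-$(uname -r)   # build prerequisites for the kernel modules
mount /dev/cdrom /mnt                                        # the Guest Additions ISO attached to the VM
sh /mnt/VBoxLinuxAdditions.run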
The Open Virtualization Format (OVF) seems to be gathering support - see http://en.wikipedia.org/wiki/Open_Virtualization_Format for an introduction.
Unlike the OP, I'm using CentOS 7 and the primary hypervisor is Hyper-V. That said, most of this will apply to other distros and hypervisors.
I have systemd-networkd configured to default to DHCP on any connected interfaces, and I haven't had problems with network configuration. While I like the power of systemd-networkd, for DNS it requires systemd-resolved - which departs significantly from traditional resolver behavior. I'm sticking with it... for now.
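For reference, the catch-all DHCP setup amounts to roughly this (the file name is arbitrary):

cat > /etc/systemd/network/99-dhcp-all.network <<'EOF'
[Match]
Name=*

[Network]
DHCP=yes
EOF
systemctl enable systemd-networkd systemd-resolved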
What I have had trouble with is kernel modules. First, many modules for various hypervisors weren't installed. Second, dracut (which builds the initramfs on CentOS and some other distros) looks at the current hardware and only includes modules which work with that hardware. It's possible to force it to include other modules via --add-drivers, but you must know the exact file names for the modules. For any given hypervisor, you'll probably need virtual storage and virtual networking modules, and perhaps a balloon memory driver, etc.

When crafting arguments for dracut, note the distinction between a dracut module and a kernel module. The list of dracut options can be found at http://man7.org/linux/man-pages/man8/dracut.8.html#OPTIONS; I ended up with something similar to the following (note, I'm not sure whether the kernel module names are accurate - and if they are, they're only for Hyper-V):
dracut --kver 0 -f --xz -a "busybox mdraid" -o "bootchart dash plymouth btrfs dmraid fcoe-uefi iscsi nbd biosdevname" --no-kernel --add-drivers "hv_storvsc hv_vmbus hv_utils"
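If you're unsure of the exact file names, you can list the modules shipped with the running kernel; the Hyper-V drivers follow the hv_* naming:

find /lib/modules/$(uname -r) -name 'hv_*.ko*'   # list Hyper-V driver modules for the running kernel
modinfo -n hv_storvsc                            # confirm a single module exists and show its path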
Another answer suggests the OVF format. IIRC, OVF import into Hyper-V requires downloading a plugin which needs a very old .NET version... in other words, it's painful if Hyper-V will be the most frequently used hypervisor for your VMs.
Fortunately, VHDs are importable by many hypervisors and easily created by qemu-img from raw, qcow2, etc.
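For example, converting a raw image to a VHD is a one-liner; qemu-img refers to the VHD format as vpc:

qemu-img convert -f raw -O vpc rootfs.img rootfs.vhd   # raw -> VHD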
Another way might be attaching a partition directly.
In QEMU it's just a command-line option.
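For example (the device name is only an illustration, and the guest needs read/write access to it):

qemu-system-x86_64 -m 512 -drive file=/dev/sda3,format=raw   # hand a host partition straight to the guest as its disk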
In VirtualBox you have to follow steps like those described in the VirtualBox user manual, chapter 9, subsection "Advanced storage configuration": http://www.virtualbox.org/manual/ch09.html#id504534
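From memory, the raw-disk command looks more or less like this (the .vmdk path is a placeholder):

VBoxManage internalcommands createrawvmdk -filename /path/to/rawdisk.vmdk -rawdisk /dev/sda   # create a VMDK that points at the raw device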
Of course, /dev/sda can be changed to any block device, in particular a partition (perhaps on LVM2).
On VMware there is also the possibility of attaching a partition or disk directly.
Thanks to that, in such a configuration you can even switch from one VM to another, using the same installation.
Are you aware of this root filesystem site? All the filesystems on there were originally developed for use with UML, but they should work with any virtualization solution. Note however that there is no bootloader installed, as the images consist of a single loop-mounted disk (without a partition table). You can still boot them with kvm using its -kernel command line option; for the others you will need to boot another image (a recovery CD/image perhaps?) and install the bootloader yourself. Obviously you may have to convert this raw format into whatever format you need (vdi/vmware/..) using the relevant tools. The scripts are included should you want to create the filesystems yourself.
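For example, booting one of these images with kvm looks roughly like this (the kernel file name is illustrative, and the root device depends on which disk controller the guest kernel sees, e.g. /dev/hda or /dev/vda instead of /dev/sda):

kvm -m 256 -kernel vmlinuz -append "root=/dev/sda rw" -drive file=rootfs.img,format=raw   # boot the raw image directly, no bootloader needed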