I work for a software company. My department is responsible for (among other things) building and distributing VMWare virtual machines to members of our sales team, who then launch them using VMWare Player to run their product demonstrations for clients.
Lately, it occurred to me that the way we update and distribute these VMs is all kinds of wrong. Here's our process for updating the "Demo VM":
- Download a fresh copy of the VM (~35 GB) from the central server
- Set it to persistent mode, then launch it and make changes such as upgrading products to the latest versions and updating licenses
- Once the changes are done, shut it down, set it back to non-persistent mode, then upload the whole thing (~35 GB) back to the central server under a new folder name with an incremented version number
- Whoever needs the latest version then downloads it from the fileserver (35 GB × X)
Not only does this take up a lot of network bandwidth, but downloading 35 GB of stuff from the network can be time-consuming, especially for people in our remote offices who don't have the luxury of intranet speeds.
My question: is there a better way of managing the update and distribution of virtual machines that need to be run locally on the users' machines?
The reason I started questioning our current method is that, when a virtual machine is updated, only a small portion of the files (VMEMs and virtual disk images) change, right? So instead of copying the entire VM folder, there should be a method to upload/download only the deltas, so to speak. Similar to how version control systems like Git work. I actually tried to use Git for this, but it turns out Git is terrible when it comes to managing huge files. So I figured I'd ask here.
Rsync would work well for this. If you want to distribute diffs to machines that don't have direct access to the server, you might also try xdelta.
Triggering updates to run inside the VMs could be an option, too, but you'd have to be careful that an impatient user didn't damage the VM if they interrupted it while it was updating. I'd go the route of patching the VM disk files, personally.
In my opinion, an appropriate answer depends on the target operating system, as the available tools differ greatly.
There is an interesting twist you can give this workflow that makes the process reproducible as well as flexible. Let me try to explain how. The task, as you have described it (and if I have understood it properly), is based on building a golden image offline and letting the sales department staff clone it.

(It is not clear from the information you gave whether the staff should be able to modify the golden image or just use it for demonstration purposes as distributed; that could spawn a ramification I'm not considering below.)
So, in order to give at least a partial answer, these are my assumptions:

- `debian-rules` or `spec` files paired with an `autotools` flow, `fpm`, ..., i.e., the packaging step
- a `cobbler` server. It can manage different services, but the interesting ones here would be TFTP, PXE, kickstarting and preseeding, i.e., the provisioning step. Alternatively, `pulp` can also distribute repositories (not only when requested by a client, but also actively from the server).
- `puppet`, `ansible`, `salt`, ..., i.e., the configuration step

and some use cases:
- full-blown GNU/Linux virtual machines
If you can comply with the assumptions above, there are quite a few ways to ensure that an end user gets a virtual machine with the exact configuration and installed software you have previously decided on. One of them would involve using `vagrant`. With this software, you only need to modify one file (the `Vagrantfile`) to describe the type of machine you want to build. Moreover, `vagrant` can also hand the provisioned machine over to your configuration management system of choice. The online documentation is pretty good, and there are plenty of examples online.

The sales staff machines could run any OS, as the only requirement is that `vagrant` is installed on the host machine. Spawning a Demo VM would then only take a simple `vagrant up`.

There are interesting alternatives to `vagrant` as well. Check, for example, `packer`.
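To make the `vagrant` idea concrete, here is a minimal `Vagrantfile` sketch. The box name, internal URL and provisioning script are hypothetical placeholders:

```ruby
# Hypothetical Vagrantfile sketch -- box name, URL and script are placeholders.
Vagrant.configure("2") do |config|
  # Base box published once on an internal server; vagrant caches it
  # locally, so users only re-download when the box itself changes.
  config.vm.box     = "mycompany/demo-vm"
  config.vm.box_url = "http://fileserver.example.internal/boxes/demo-vm.json"

  # Incremental updates (product upgrades, license refreshes) go into a
  # provisioning script instead of a whole new 35 GB image.
  config.vm.provision "shell", path: "update-demo.sh"
end
```

With this layout, pushing an update means editing `update-demo.sh` and telling users to run `vagrant provision`, rather than re-uploading the entire image.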
.- proposal based on containers, not full-blown virtual machines
If the sales staff machines can run any GNU/Linux operating system, you could also take advantage of containers, a way of running virtualized operating systems with little overhead. The more interesting ways (in my opinion) of using this technology include, but are not limited to, `libvirt`, `docker` and `LXC`. Docker has the concept of a `Dockerfile`, similar in functionality to the `Vagrantfile`, and, more interestingly, there is a registry that can host your distributable images.

Containers can operate as simple services in the hosting operating system, so using them is pretty simple.
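As a sketch of the `Dockerfile` idea, assuming a Debian base image; the product name, files and paths are hypothetical:

```dockerfile
# Hypothetical Dockerfile sketch -- product name and paths are placeholders.
FROM debian:stable

# Each instruction becomes a cached layer; when only the product version
# changes, clients pulling from the registry re-download just that layer.
RUN apt-get update && \
    apt-get install -y --no-install-recommends demo-product && \
    rm -rf /var/lib/apt/lists/*

COPY licenses/ /opt/demo/licenses/
CMD ["/opt/demo/bin/run-demo"]
```

Pushed to a private registry, an update then costs each client only the size of the changed layers, which is essentially the delta-transfer behaviour you were looking for.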
- do without an operating system
To help improve the distribution process, one should of course ensure that only the minimum required software is installed. But there are ways you could do almost without an operating system at all. If your use case can benefit from software like `supermin`, an appliance could be as small as a few megabytes to a gigabyte.

Others have proposed a different approach, without a hosting operating system, but that model does not seem to fit what you described.