I've done a search and haven't found anything addressing patching and system updates. My guidelines say servers need to have the necessary patches applied. If I have a VM host, is that an extra layer to patch and update, even with a bare-metal hypervisor, as opposed to a physical server? (i.e. more work, testing and documentation per my guidelines.)
How often do type 1/bare-metal hypervisors get updated? Does that matter? Does the fact that it is an extra software layer introduce more complexity and risk (security and reliability)? (e.g. 99%-bug-free software × 99%-bug-free software = a 98%-bug-free system?)
(My practical experience is with VMware Workstation, VMware Server and VirtualBox.)
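The compounding intuition in the question is just multiplication of independent reliabilities, and can be checked with a one-liner (a sketch only; the 99% figures are illustrative, not measured):

```shell
# Illustrative only: two independently "99% bug free" layers compound to
# 0.99 * 0.99 = 0.9801, i.e. roughly the "98% bug free system" above.
awk 'BEGIN { printf "%.4f\n", 0.99 * 0.99 }'
```

Of course this assumes the failure modes of the two layers are independent, which is itself debatable.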
Yes, products like VMware ESXi do need to be patched periodically (the updates are cumulative), but the patches come less frequently than for a mainstream operating system, and the potential attack surface is smaller - your hypervisor's management interface should not be publicly accessible.
I'll use VMware ESXi version 5.0 (not 5.1) as an example...
ESXi 5.0 has had the following update schedule:
Between 9/2011 and the present, there have been TEN updates to the ESXi 5.0 product. Of those, SIX were security-focused fixes rolled into the update bundles, with descriptions like:
"ESXi NFS traffic parsing vulnerability" - CVE-2012-2448.
These security vulnerabilities are real (they sometimes mirror general Linux security bugs), but I think most organizations aren't very exposed to the risks. It's up to the engineer to assess that risk, though. Would your users accept significant downtime to patch an exploit like the one above?
Maybe? Maybe not.
I run VMware's Update Manager, but only tend to update if I'm affected by a bug or need a feature enhancement. In a clustered setup, patching is easy, with no downtime to the running VMs. If no other pressing reason exists, I just aim to update quarterly. Individual hosts do require a full reboot, since the patches are delivered as monolithic images.
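For reference, a rolling patch of a small cluster boils down to a few steps per host. Here's a minimal sketch (the hostnames and depot filename are hypothetical, and the script only prints the real esxcli steps as a dry run rather than executing them against a host):

```shell
#!/bin/sh
# Dry-run sketch: print the per-host steps of a rolling ESXi patch cycle.
# DEPOT is a hypothetical cumulative offline bundle downloaded from VMware.
DEPOT="ESXi500-201209001.zip"

for HOST in esx01 esx02 esx03; do
    # 1. Evacuate guests (vMotion/DRS handles this in a real cluster),
    #    then put the host into maintenance mode.
    echo "[$HOST] esxcli system maintenanceMode set --enable true"
    # 2. Apply the cumulative patch bundle.
    echo "[$HOST] esxcli software vib update -d /vmfs/volumes/datastore1/$DEPOT"
    # 3. Reboot (the patches ship as a monolithic image, so this is required).
    echo "[$HOST] reboot"
    # 4. Exit maintenance mode and let guests migrate back.
    echo "[$HOST] esxcli system maintenanceMode set --enable false"
done
```

Update Manager automates exactly this loop; the point is that with vMotion available, no guest ever sees the reboot.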
As a side note, whenever I inherit a VMware ESXi setup or work on a system I don't normally manage, I often find hosts that have never had any VMware patches applied. That is wrong! But I can see how administrators could make that mistake once systems are up and running.
This is a pretty good question if you're new to virtualisation with 'bare metal' hosts. Doing things this way requires a different mindset from the approach you might take with hypervisors that run as a service/application on top of a conventional OS.
In my experience, it's probably fair to say that ESX and Hyper-V need less patching overall than conventional operating systems. This doesn't mean they don't need patching at all, or that applying some of the patches wouldn't be beneficial regardless of "need", but it does mean that interruptions to your services to patch the host should be less frequent and more under your control. There is a potential security risk to the hypervisor OSes just as there is to any other, and while you can minimise the exposure of this risk (e.g. only exposing hypervisor management on an isolated VLAN that can't logically be reached from a compromised server), it would be foolish to pretend there's no risk at all.
So if you have, say, 4 non-virtual servers and you move them all onto a single virtualised host, then yes, you're increasing the amount of disruption that could be caused by the need to patch the host system (or to deal with a hardware issue, for that matter).
While I'd suggest the chance of this risk occurring is relatively low (I'm talking about the difference between patching a virtual host and the sort of reboot-requiring patching you'd have to do on a standalone system anyway), there's no getting away from the fact that the impact is high.
So why do we do it then?
The true benefit of virtualisation comes from being able to set up more than one host and configure the hosts to work together, allowing guests to be moved from one host to the other in the event that one host fails or that you wish to schedule patches on the host systems.
Using this approach I've managed to patch 5 ESX hosts in turn without any disruption at all to the 40 virtual servers running on top of them. It's simply a matter of economies of scale: once you have enough virtual guest machines to make it worthwhile to build this sort of complex setup and manage it with the kind of tools @ewwhite mentions in his answer, the payback in reducing the risks you're worried about arrives very quickly.
A virtual server will require the same maintenance and patches a physical server does. Bare-metal hypervisors will also require updates - for security, but also to fix bugs and improve performance. The more servers you have, the more work it takes to keep them up to date, whether they're physical or virtual.
Based on the above answers, it seems virtualising a server introduces more complexity and risk in terms of security and reliability, but this needs to be weighed against the benefit of being able to reduce downtime by virtualising a server.
If your environment requires audits, tests and documentation, the cost-benefit of the added workload of a virtualised environment has to be weighed against the number of servers and systems staff you have. In our environment we don't have the staff time to maintain the audit trail for a virtualised environment. Our business processes can tolerate some downtime, but we cannot be missing an audit trail and documentation.