There are a few questions that I've found on ServerFault that hint around this topic, and while it may be somewhat opinion-based, I think it can fall into that "good subjective" category based on the below:
Constructive subjective questions:
* tend to have long, not short, answers
* have a constructive, fair, and impartial tone
* invite sharing experiences over opinions
* insist that opinion be backed up with facts and references
* are more than just mindless social fun
So, with that out of the way.
I'm helping out a fellow sysadmin who is replacing an older physical server running Windows 2003, and he's looking to not only replace the hardware but "upgrade" to 2012 R2 in the process.
In our discussions about his replacement hardware, we discussed the possibility of him installing ESXi and then making the 2012 "server" a VM and migrating the old apps/files/roles from the 2003 server to the VM instead of to a non-VM install on the new hardware.
He doesn't foresee needing to move anything else to a VM or create additional VMs in the next few years, so in the end this will either be new hardware running a normal (bare-metal) install or new hardware running a single VM on ESXi.
My own experience would still lean towards a VM, even though there isn't a truly compelling reason to do so other than the possibility of creating additional VMs later. There is also the added overhead and management aspect of the hypervisor, although I have experienced better management and reporting capabilities with a VM.
So with the premise of hoping this can stay in the "good subjective" category to help others in the future, what experiences/facts/references/constructive answers do you have to help support either outcome (virtualizing or not virtualizing a single "server")?
In the general case, the advantage of putting a standalone server on a hypervisor is future-proofing. It makes future expansion or upgrades much easier, much faster, and as a result, cheaper. The primary drawback is additional complexity and cost (not necessarily financially, but from a man-hours and time perspective).
So, to come to a decision, I ask myself three questions (and usually prefer to put the server on a hypervisor, for what it's worth).
I think the operating system being virtualized is a big factor, along with performance requirements and potential for expansion/growth. Today's servers are often excessively powerful for the applications and operating systems we use. In my experience, most standard Windows systems can't make efficient use of the resources available in a modern dual-socket server. With Linux, I've leveraged some of the granular resource management tools (cgroups) and containers (LXC) to make better use of physical systems. But the market is definitely geared toward virtualization-optimized hardware.
That said, I've virtualized single systems rather than doing bare-metal installs in a few situations. Common reasons are:
Licensing - The dwindling number of applications that license based on rigid core, socket or memory limits (without regard to the trends in modern computing). See: Disable CPU cores in bios?
Portability - Virtualizing a server abstracts the VM from the hardware. This makes platform changes less disruptive and allows the VM to reference standard virtualized devices/components. I've been able to keep decrepit (but critical) Windows 2000 systems on life-support using this approach.
Future expansion - I have a client now who has a Windows 2003 domain controller running on 2001-era hardware. I'm building a new single-host ESXi system for them which will house a new 2012 R2 domain controller for the interim. But more VMs will follow. In this configuration, I can offer reliable resource expansion without additional hardware costs.
The downside of doing this with a single host/single VM is management. I'm coming from the VMware perspective, but in the past, ESXi was a bit friendlier to this arrangement. Today, the requirement to use the vSphere Web Client and the restricted access to basic features make running a single-host (and single-VM) solution less attractive.
Other considerations are crippled hardware monitoring and more complexity involved with common external peripherals (USB devices/tape drive/backups/UPS solutions). Today's hypervisors really want to be part of a larger management suite.
There are a few benefits to virtualizing a single server. The first few things that come to mind are:
I think the most important of those would be the snapshot capabilities. We use VMware all over in our company, so for us it would make sense to have the server "ready" for when there's a need for more VMs.
This is not a long answer, but anyway:
The most compelling reason to use a hypervisor for a single server, especially with something like Windows Server, is that you have total hardware abstraction for the production OS and can just move it to completely new server hardware without any problem, should the need arise. I consider this a really valuable feature that far outweighs the drawbacks of having a practically unnecessary hypervisor running in the background.
I'm not going to provide as detailed an answer here as others have, so I'll just say that I'm finding it harder and harder these days to justify installing the server OS on bare metal as opposed to installing a hypervisor (of your choice) and virtualizing the workloads. The advantages to doing this, in my mind, are:
Cost benefit. In the long run, if I need to deploy additional workloads I don't have to shell out for more hardware for those additional workloads. In some cases, when using Hyper-V, I may even save on my licensing costs.
Ease of deployment and redeployment.
Ease of implementing high availability and failover.
Portability. I can likely move the VM just about anywhere if I need to decommission or outsource the current host.
Future proofing. Your fellow sysadmin may not currently see any future need for a hypervisor-based infrastructure, but my guess is that within 12 to 24 months he will, and he'll be glad he chose to go down the virtualization route, if he does in fact choose that route.
Disaster recovery. I can back up an entire VM and restore it or replicate it to another host in a matter of minutes.
And so on and so on...
Here are a few reasons why I would say a VM is better:
Built-in "KVM over IP" (sort of) - you can access your server remotely on the console without needing an KVM over IP. Sometime you just don't want to do something over RDP and need console access. With a VM, you fire up the management tool of choice (XenCenter, vSphere Client, etc) and you're on the console of your VM.
With VMs (and for non-VM servers, with my KVM over IP) I no longer have to stay in my cold server room for hours.
Migration to new hardware - OS upgrade aside, to put in your new hardware you have to migrate the system, move things around, etc. With a VM, you don't (usually) have to do anything. You upgrade your hardware, put the VM files on the new hardware and fire it up.
While one may not foresee future VMs, "if you build it, they will come". You'll want to spin up a new VM to test something, try new stuff, etc. There are just so many more possibilities.
VMs give you the ability to revert with snapshots, take a copy, make a clone of the VM (at run time) and then spin it up - whether to test something before putting it live, or just to have a second copy of the first. There are many things you can do with VM snapshots and the like (see the sketch after this list).
Redundancy - if you throw in a second VM server, you can have redundant hardware, and while I don't know about the current VMware licensing schemes, XenServer now apparently includes XenMotion in the free package, so the cost overhead may not apply.
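Since snapshots are the capability I lean on most, here's a minimal sketch of scripting them with VMware's pyVmomi Python SDK. The host name, credentials and VM name are placeholders (not anything from the question), and connection arguments vary a bit between pyVmomi versions, so treat this as a rough outline rather than a drop-in script.

```python
# Minimal sketch using pyVmomi (pip install pyvmomi). Host, credentials and
# VM name below are placeholders; adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab convenience only; use valid certs in production
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and grab the VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "prod-2012r2")

# Take a disk-only, quiesced snapshot before risky maintenance.
WaitForTask(vm.CreateSnapshot_Task(name="pre-maintenance",
                                   description="before patching",
                                   memory=False, quiesce=True))

# If the change goes badly, roll back to the snapshot just taken:
# WaitForTask(vm.snapshot.currentSnapshot.RevertToSnapshot_Task())

Disconnect(si)
```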
The reasons I would not use a VM:
Overhead - hardly any, but there is obviously some overhead.
More complex to manage - a little more complex, but it's easy to learn. If you're not going for a massively large virtualized environment, training is trivial.
I'm coming in late, and feel like people have already made some of the points I would have wanted to make, but I'll briefly recap:
However, the thing that no one has mentioned yet and probably should be mentioned: If you're in the kind of shop where people may need a test server, and are likely to solve that need by grabbing a spare desktop and slapping a server OS on it, being able to offer them a VM will likely suit your and their needs much better. Virtualizing the new server can be the "reason" to allow future virtual expansion. (And, frankly, if you're not in that kind of shop, you probably already have virtualization.)
Of course, not everything virtualizes. I scored physical hardware for the management software that included PXE by describing to them what they'd need to do to turn off TCP Segmentation Offload (PXE ran like a one-legged dog with TSO on, but they would have had to turn it off for the entire virtual VLAN, and they were disinclined to do that). So, if the new server is something specialized enough to be unsuitable, well, never mind.
But barring that type of specialization, it'd be worth it to me to get rid of a bunch of (potentially unmanaged) PC-class machines running server OSes lying around, now or in the future.
Absolutely, I virtualize whenever I can. This allows me to prepare to do the following in the future:
In short, unless the server is going to be running specific software with limitations that prohibit it from being virtualized (usually strict network or disk I/O latency requirements, and with the right hardware even those are achievable with virtualization), I try to keep things as virtual as possible.
One reason I can think of in favor of virtualizing a single server into a VM on a single host is the ability it gives you to then mess with a test environment for that "server".
If the hardware is more than capable, you could clone the server VM, remove its NIC/network abilities, and isolate that clone as a "test platform" to mess with before trying the same thing on the "production" server. An example would be if the server is running ERP software and you want to test what would happen if you ran a particular script against the ERP software/database. You could do it on the cloned VM as a test first. This could then be done in conjunction with a snapshot of the live VM before deploying the change on it, with the added benefit of knowing it should work fine.
Creating the same cloned "test" environment could be done with a P2V of an existing physical server, but you'd then require an additional physical host to place your new test VM on; in the scenario above, everything can reside on the same physical hardware (which nowadays is almost always overkill for a single VM).
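For what it's worth, that clone step can be scripted too. Below is a rough pyVmomi sketch; note that CloneVM_Task is only available through vCenter, so on a standalone (free) ESXi host you'd copy the VMDKs and register a new VM instead. All names and credentials here are placeholders.

```python
# Rough sketch: clone the production VM into an isolated, powered-off test copy.
# Requires vCenter (clone isn't exposed on a standalone ESXi host); names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "prod-erp")

# Keep the clone on the same host/datastore and leave it powered off, so its
# NIC can be disconnected or moved to an isolated port group before first boot.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(),
                        powerOn=False, template=False)
WaitForTask(vm.CloneVM_Task(folder=vm.parent, name="prod-erp-test", spec=spec))

Disconnect(si)
```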
If your use case doesn't require 100% of the power of dedicated hardware, then I would go virtual every time. It provides flexibility, snapshot facilities, and built-in console access (even though you should use out-of-band management as well).