I'm searching for good rules of thumb to understand when NOT to virtualize a machine.
For example, I know that a fully CPU-bound process with near 100% utilization is probably not a good candidate to virtualize, but does it make sense to virtualize something that keeps the CPU at a substantial level (say 40 or 50%) most of the time?
Another example: if I virtualize 1000 machines, even if they are only lightly or moderately utilized, it would probably be bad to run that all on a host with only 4 cores.
Can someone summarize hints about virtualization based on machine workload or sheer number of guest machines when compared to host resources?
I typically virtualize on Windows hosts using VirtualBox or VMware, but I'm assuming this is a pretty generic question.
Things which I would never put in a VM:
Anything which uses specific hardware which cannot be virtualized: usually graphics, quite a few hardware security modules, anything with customized drivers (special purpose network drivers, for example).
Systems with license issues. Some software charges per physical CPU or core, no matter how few you have allocated to the VM. You'd get hit in an audit if you had software licensed for a single core running in a VM on a 32-core server.
Things which I would discourage putting in a VM:
Software which already makes an effort to use all the resources of commodity hardware. Machines working as part of a "big data" effort like Hadoop are typically designed to run on bare metal.
Anything which is going to be finely tuned to make use of resources. When you really begin tuning a database, VMs contending for resources will throw a wrench in the works.
Anything which already has a big bottleneck. If it already doesn't play well with itself, it is not likely to play well with others.
There are some things which are quite awesome for putting in VMs:
Anything which spends quite a lot of time idle. Utility hosts like mail and DNS have a difficult time generating enough load on modern hardware to warrant dedicated servers.
Applications which do not scale well (or easily) on their own. Legacy code quite frequently falls into this category. If the app won't expand to take up the server, use lots of little virtual servers.
Projects/applications which start small but grow. It's much easier to add resources to a VM (as well as move to newer, bigger hardware) as opposed to starting on bare metal.
Also, I'm not sure if you are exaggerating about putting a huge number of VMs on a single host, but if you are aiming for a large VM:hardware ratio, you may want to consider ESX, Xen, or KVM instead. You'll fare much better than with VMware or VirtualBox on Windows.
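If it helps, a rough back-of-envelope check before consolidating is to sum the average CPU demand of the candidate guests and compare it against what the host can comfortably supply. A minimal sketch in Python; the guest list, the 8-core host, and the 25% headroom figure are all invented for illustration:

```python
# Back-of-envelope consolidation check: sum average CPU demand of the
# candidate guests and compare it to what the host can comfortably supply.
# All figures below are invented for illustration.
guests = {
    "mail":      0.05,   # average cores consumed
    "dns":       0.02,
    "intranet":  0.30,
    "build":     1.50,
    "reporting": 0.80,
}

host_cores = 8
headroom = 0.25          # keep ~25% free for the hypervisor and bursts

demand = sum(guests.values())
usable = host_cores * (1 - headroom)

print(f"aggregate demand: {demand:.2f} cores, usable: {usable:.2f} cores")
print("fits" if demand <= usable else "oversubscribed - expect contention")
```

Running the same arithmetic with peak rather than average demand tells you what happens when everything gets busy at once.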
Disk subsystem. This is usually the least shareable resource. Memory is, too, of course, but that one is more apparent.
Disk subsystem limitations cut both ways: if one guest does a lot of disk I/O, the other guests slow down. If a guest is in production, it probably needs fast responses to web queries. This can be very frustrating, and it is also a big reason not to rent virtual hardware. You can minimize the problem by using dedicated disks.
Giving guests only 512 MB of memory pushes all disk caching onto the host, and that cache is not divided equally among the guests.
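If you want to see how contended the shared disk subsystem actually is, a small write-plus-fsync latency probe run inside the guest gives a quick feel for it. A sketch in Python; the file path and sample count are arbitrary choices:

```python
import os
import statistics
import time

# Rough probe of write+fsync latency as seen from inside a guest.
# High or wildly varying numbers suggest neighbours are hammering the
# shared disk subsystem. PATH and SAMPLES are arbitrary choices.
PATH = "latency_probe.tmp"
SAMPLES = 200

latencies = []
with open(PATH, "wb", buffering=0) as f:
    for _ in range(SAMPLES):
        start = time.perf_counter()
        f.write(os.urandom(4096))    # one 4 KiB block
        os.fsync(f.fileno())         # force it through to the (virtual) disk
        latencies.append((time.perf_counter() - start) * 1000)

os.remove(PATH)
print(f"median fsync latency: {statistics.median(latencies):.2f} ms, "
      f"p95: {sorted(latencies)[int(SAMPLES * 0.95)]:.2f} ms")
```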
Do not worry much about CPU. For CPU, virtualization is very efficient, often little more than multiple processes running on the same system. I seldom see multi-Xeon systems running at 100% CPU.
There are two points to virtualization performance.
On shared bottlenecks, who else is on the same iron? If you are co-located in a virtualized environment, you are very dependent on the hosting partner being honest with you.
I think the main question to ask for raw performance (particularly interactivity) is which parts of the virtualization stack are emulated. This differs depending on the setup; disk and network are the typical candidates. As a rule of thumb, emulation roughly doubles the performance "cost" of an operation, so any hardware latency figure should be counted double and any throughput figure should be halved.
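To make that rule of thumb concrete, here is what it does to some invented bare-metal figures (the numbers are illustrative only, not measurements):

```python
# Rule of thumb from above: under emulation, count latency twice and
# halve throughput. The bare-metal figures are invented examples.
bare_metal = {
    "disk_latency_ms":      5.0,
    "disk_throughput_mbs":  400.0,
    "net_latency_ms":       0.2,
    "net_throughput_mbs":   940.0,
}

estimated = {
    name: value * 2 if "latency" in name else value / 2
    for name, value in bare_metal.items()
}

for name in bare_metal:
    print(f"{name}: {bare_metal[name]} -> {estimated[name]:.1f} (emulated estimate)")
```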
Ultimately, any high-performance load shouldn't be virtualized. The performance overheads of virtualization are non-trivial. See the results of my tests here:
https://altechnative.net/virtual-performance-or-lack-thereof/
OTOH, if you are looking to consolidate a number of machines that are mostly idle all the time, virtualization is the way forward.
Good answer from anttiR.
In addition: time-critical systems. I just figured out that Hyper-V's clock drift (the VM clock slowly falling behind; all modern OSes in VMs do that and get resynced often) does not play nicely with some time-critical applications I am developing. I am also going to use "a lot" of CPU there, and I plan to get a 12-core machine just for that application in production.
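If you want to put a number on the drift, you can compare the guest clock against an NTP reference. A minimal SNTP query in Python; pool.ntp.org is just a convenient public server, substitute whatever you trust:

```python
import socket
import struct
import time

# Compare the local (guest) clock to an NTP server to see how far it drifts.
# pool.ntp.org is only a convenient public choice.
NTP_SERVER = "pool.ntp.org"
NTP_EPOCH_OFFSET = 2208988800          # seconds between the 1900 and 1970 epochs

def ntp_time(server: str) -> float:
    packet = b"\x1b" + 47 * b"\0"      # SNTP v3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    secs, frac = struct.unpack("!II", data[40:48])   # transmit timestamp
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

offset = ntp_time(NTP_SERVER) - time.time()
print(f"guest clock offset: {offset * 1000:+.1f} ms relative to {NTP_SERVER}")
```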