What are the opinions on allowing virtual memory inside a virtual machine?
For example, on a host machine with 8 GB of memory, I could run 4 VMs, each with roughly 2 GB, and there would be no host swapping. However, in each VM I could have a 2 GB page file, so each virtual server would have 4 GB of usable memory: 2 physical, 2 virtual.
OR... I could give each VM 4 GB of "memory" and have the host use 8 GB of real memory plus 8 GB of virtual memory, with no page file in any VM. Each VM would still see "4 GB", but the paging would occur on the host.
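To make the arithmetic concrete, here's the bookkeeping for the two layouts as a quick shell sketch (the GB figures are the ones from my scenario above, nothing measured):

```shell
host_ram=8                     # GB of physical RAM on the host

# Option 1: 4 VMs x 2 GB RAM, plus a 2 GB page file inside each guest.
guests_see=$((4 * (2 + 2)))    # total "usable" GB from the guests' view: 16
host_commit_1=$((4 * 2))       # RAM the host must back: 8 -> no host swapping

# Option 2: 4 VMs x 4 GB "RAM", no page file in the guests.
host_commit_2=$((4 * 4))       # 16 GB against 8 GB physical -> 2:1 overcommit

echo "option 1: host backs ${host_commit_1}/${host_ram} GB of guest RAM"
echo "option 2: host backs ${host_commit_2}/${host_ram} GB of guest RAM"
```

Either way the guests end up with 16 GB of apparent memory between them; the only question is which layer does the paging.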
The warm-fuzzy part of me says set up paging in each guest like you would on a real server and you're good. But the analytical side of me sees two major advantages to overcommitting the host memory and having no paging in the VMs. First, the IO for the virtual memory is then handled by the host OS, which is closer to the bare metal, so it should be quicker. And second, paging would only be required when the host actually ran short of memory. If one guest wanted its full 4 GB but the other guests weren't using their memory, no paging would be required at all.
Thoughts?
I'm not a virtualization expert (in fact I think it's the wrong tool for the job most of the time), but from what I have read, your guest OSes should not be allowed to swap. The primary reason for preventing swapping is that it gives one guest OS a way to hog a large portion of the host's IO bandwidth.
Also, you don't want to pretend to your guest OSes that the host has more physical memory than it really has. That will cause the host to swap heavily, but debugging the resulting performance issues inside the guests will be very hard: from their point of view they are not swapping, and none of the OS-level tools in the guests will show it.
It may even be that with tools like Xen and VMware you cannot overcommit memory on the host at all, because of the use of balloon memory drivers.
That would depend heavily on the consequences of overcommitting memory on your host OS. I would be more than a bit annoyed if, for example, the Linux out-of-memory killer were to slay my virtual machine. I tend to set aside a small, separate, preallocated, snapshot-independent (if applicable to your VM solution) virtual disk for each guest OS, make sure the file hosting that disk image is unfragmented and/or on a fast drive, and configure the guest swap space to reside on that virtual disk. Hypervisor memory management today is good enough that, under memory pressure, you won't feel the difference between host-side and guest-side swapping, and this way I can fine-tune each guest's behaviour independently. Best of both worlds.
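From inside the guest, that layout is just a few standard commands; a minimal sketch, assuming the dedicated virtual disk shows up as /dev/sdb (the device name is an assumption, so check with `lsblk` first):

```shell
# Inside the guest: dedicate the small preallocated virtual disk to swap.
# /dev/sdb is illustrative -- verify which device the hypervisor exposed.
mkswap /dev/sdb                                   # write a swap signature
swapon /dev/sdb                                   # enable it immediately
echo '/dev/sdb none swap sw 0 0' >> /etc/fstab    # persist across reboots
swapon --show                                     # confirm it's active
```

Since the disk holds nothing but swap, snapshots and backups can skip it entirely.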
Swap inside the VM is about better isolation of resources. Such a VM can't drag the host down with its RAM demands; it's already constrained. And if you put the swap on a disk other than the VM's system disk, you can even use an unsafe cache policy for it.
"External" swap, on the other hand, is about better utilisation of resources.
So that's the trade-off: isolation vs. utilisation. Your choice.
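If you're on libvirt/KVM, that per-disk cache policy can be set in the domain XML; a sketch, with the file path and target device made up for illustration:

```xml
<!-- Dedicated swap disk for the guest. cache='unsafe' skips host-side
     flushes, which is acceptable here because swap contents are
     disposable after a crash anyway. Path and target are illustrative. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='unsafe'/>
  <source file='/var/lib/libvirt/images/guest1-swap.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```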
I wouldn't give your guests virtual memory, due to the IO problems. Nor would I give the host virtual memory and the guests too much "physical" memory, as a guest wouldn't realise it was using virtual memory rather than physical memory.
Which doesn't leave you with any solution but to purchase more memory. There really is no substitute for more RAM, and it's so cheap that if your server can take more, I'd buy more.