I've been using VMware for many years, running dozens of production servers with very few issues, but I've never tried hosting more than 20 VMs on a single physical host. Here is the idea:
- A stripped-down Windows XP install can live with 512 MB of RAM and 4 GB of disk space.
- $5,000 gets me an 8-core, server-class machine with 64 GB of RAM and four SAS mirrors.
- Since 100 such VMs should fit on this server, my hardware cost is only $50 per VM, which is super nice (cheaper than renting VMs from GoDaddy or any other hosting shop); a quick back-of-the-envelope check is sketched below.
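Just as a sanity check, here is a minimal sketch of that arithmetic; the per-VM figures come from the list above, and the usable size of the SAS mirrors is my own guess:

```python
# Back-of-the-envelope check of the 100-VM plan (figures from the list above,
# except the mirror size, which is a guess).
VM_COUNT = 100
RAM_PER_VM_GB = 0.5        # 512 MB per stripped-down XP guest
DISK_PER_VM_GB = 4
SERVER_COST_USD = 5000
SERVER_RAM_GB = 64
SERVER_DISK_GB = 4 * 146   # assumption: four mirrors of ~146 GB usable each

print(f"RAM needed:  {VM_COUNT * RAM_PER_VM_GB:.0f} GB of {SERVER_RAM_GB} GB")
print(f"Disk needed: {VM_COUNT * DISK_PER_VM_GB} GB of ~{SERVER_DISK_GB} GB")
print(f"Hardware cost per VM: ${SERVER_COST_USD / VM_COUNT:.0f}")
```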
I'd like to see if anybody has been able to achieve this kind of scalability with VMware. I've done a few tests and bumped into a weird issue: VM performance starts degrading dramatically once about 20 VMs are running, yet the host server shows no resource bottlenecks (the disks are 99% idle, CPU utilization is under 15%, and there is plenty of free RAM).
I'd appreciate it if you could share your success stories about scaling VMware or any other virtualization technology!
Yes, you can. Even some Windows 2003 workloads get by on as little as 384 MiB, so 512 MiB is a pretty good estimate, if a little on the high side. RAM should not be a problem, and neither should CPU.
A hundred VMs is a bit steep, but it is doable, especially if the VMs are not going to be very busy. We easily run 60 servers (Windows 2003 and RHEL) on a single ESX host.
Assuming you are talking about VMware ESX, you should also know that it is able to overcommit memory. VMs hardly ever use their full allotted memory, so ESX can commit more RAM to VMs than is physically present and run more VMs than it 'officially' has RAM for.
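As a rough illustration of how far that can stretch the physical RAM, here is a minimal sketch; the active-memory fraction and page-sharing savings below are illustrative assumptions, not measured ESX figures:

```python
# Rough memory-overcommit estimate; the fractions are illustrative assumptions,
# not measured ESX figures.
host_ram_gb = 64
configured_per_vm_gb = 0.5   # 512 MB granted to each XP guest
active_fraction = 0.6        # assumption: guests actively touch ~60% of their RAM
shared_fraction = 0.25       # assumption: page sharing reclaims ~25% across identical XP images

effective_per_vm_gb = configured_per_vm_gb * active_fraction * (1 - shared_fraction)
print(f"Effective RAM per VM: {effective_per_vm_gb * 1024:.0f} MB")
print(f"VMs that fit before the host runs out of RAM: {int(host_ram_gb / effective_per_vm_gb)}")
```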
Most likely your bottleneck will not be CPU or RAM, but I/O. VMware boasts huge IOPS numbers in its marketing, but when push comes to shove, SCSI reservation conflicts and limited bandwidth will stop you dead well before you come close to the IOPS VMware brags about.
Anyway, we are not experiencing the 20 VM performance degradation. What version of ESX are you using?
One major problem with a large environment like that would be disaster prevention and data protection. If the server dies, then 100 VMs die with it.
You need to plan for some sort of failover of the VMs, and for some sort of "extra-VM" management that will protect your VMs in case of failure. Of course, this sort of redundancy means increased cost, which is probably why such an outlay is often not approved until its benefits have been demonstrated in practice (by its absence).
Remember, too, that the VM host is only one of several single points of failure in a setup like this. A massive VM infrastructure requires careful attention to preventing both data loss and VM loss.
No statement on the viability of this in production, but there is a very interesting NetApp demo where they provision 5,440 XP desktops on 32 ESX hosts (that's 170 per host) in about 30 minutes, using very little disk space thanks to deduplication against the common VM images:
http://www.youtube.com/watch?v=ekoiJX8ye38
My guess is that your limitation is coming from the disk subsystem; you seem to have accounted for memory and CPU usage appropriately.
I've never done it, but I promise you'll spend much more on storage to get enough IOPS to support that many VMs than you will on the server hardware. You'll need a lot of IOPS if all 100 of them are active at the same time. Not to sound negative, but have you also considered that you're putting a lot of eggs in one basket (it sounds like you're after a single-server solution)?
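To put rough numbers on that, here is a sketch of the I/O budget; the per-VM and per-spindle IOPS figures are my guesses, not benchmarks:

```python
# Rough IOPS budget for 100 XP VMs; per-VM and per-spindle figures are guesses,
# not benchmarks.
vm_count = 100
iops_per_idle_vm = 5         # assumption: a mostly idle XP guest
iops_per_busy_vm = 25        # assumption: a guest doing light work
iops_per_sas_spindle = 150   # ballpark for a 10k RPM SAS disk
mirror_pairs = 4             # four RAID-1 mirrors in the proposed server

demand_low = vm_count * iops_per_idle_vm
demand_high = vm_count * iops_per_busy_vm
supply = mirror_pairs * iops_per_sas_spindle   # writes hit both disks of a mirror

print(f"Demand: {demand_low}-{demand_high} IOPS; supply: roughly {supply} IOPS")
```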
I would be most worried about CPU contention with 100 VMs on a single host. You have to remember that the processor is not virtualized, so each machine will have to wait for access to the CPU. You can start to see contention by looking at esxtop; VMware engineers have told me that anything over 5 in the %RDY field is very bad.
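If you want to check this on your own host, esxtop has a batch mode that writes perfmon-style CSV (something like `esxtop -b -d 5 -n 60 > esxtop.csv`). Below is a minimal sketch that pulls out the per-VM "% Ready" columns; the exact counter names vary between ESX builds, so the substring match is an assumption:

```python
# Scan an esxtop batch-mode CSV for per-VM "% Ready" columns and print the worst averages.
# Counter names differ between ESX builds; the substring match here is an assumption.
import csv
from collections import defaultdict

readings = defaultdict(list)
with open("esxtop.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    ready_cols = {i: name for i, name in enumerate(header)
                  if "% Ready" in name and "Group Cpu" in name}
    for row in reader:
        for i, name in ready_cols.items():
            try:
                readings[name].append(float(row[i]))
            except (ValueError, IndexError):
                pass  # skip blank or malformed samples

# Anything averaging much over ~5 here is a sign of CPU contention.
averages = {name: sum(v) / len(v) for name, v in readings.items() if v}
for name, avg in sorted(averages.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{avg:6.1f} %RDY  {name}")
```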
In my experience, I've seen about 30-40 servers running on one host (none of them doing very much).
I had 10 guests on VMware Server 1.0.6 (under Windows 2003), and it would run into I/O issues on a regular basis (and if the nightly builds ever overlapped with something else, they would have issues). After moving from Windows to ESXi U3, we found that our performance problems went away (nightly builds no longer failed).
Also note that while SSDs have a much higher I/O rate than spinning media, there are cases where that doesn't hold, such as certain write patterns: lots of small writes scattered across the drive will kill performance unless the controller has a smart write-buffering cache that handles scattered writes well.
If you run into issues, I'd recommend investigating/testing putting the swap files on different drives.
If you're going to do that, I'd strongly urge you to use the new Intel 'Nehalem' Xeon 55xx-series processors; they're designed to run VMs, and their extra memory bandwidth will help enormously too. Also, if you can, use more, smaller disks rather than a few big ones; that will help a lot. And use ESX v4 over 3.5 U4 if you can.
I have twenty-something XP VMs running with 512 MB of RAM each on a machine with 16 GB of RAM. With less than that they swap to disk, and that becomes the bottleneck. These are always-active XP VMs, though.
VMware's memory overcommit feature should allow you to give more RAM to each XP machine. Similar machines will share the same pages, which could reduce disk writes. It's something I'd like to look into for our setup so we can add more machines, as our XP VMs are doing 10-20 MB of continuous disk traffic.
We were unable to get 100 happy guests on VMware Server, but then found that ESXi does a much better job. So it appears that 100 XP VMs is not a problem if you use ESXi and a decent server (a few disk mirrors to spread the I/O, a couple of i7 chips, and 64 GB of RAM). There is no visible delay for end users, and the host resources are not maxed out (the hottest one is CPU, but it's typically at least 70% idle).
P.S. I posted this question back when we were struggling with VMware Server.
Last time I checked, VMware recommended no more than 4 VMs per processor core for ESX, assuming one vCPU per VM.
This suggests that management overhead becomes a factor beyond that point.
I'm very interested to see if you can actually achieve a 4x factor on an 8-core box.
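For context, here is the trivial arithmetic behind that guideline applied to the 8-core box from the question (assuming one vCPU per VM; the 4-per-core figure is just the rule of thumb quoted above):

```python
# Consolidation math for the proposed 8-core server, assuming one vCPU per VM.
cores = 8
guideline_vms_per_core = 4   # the rule of thumb quoted above
target_vms = 100

print(f"Guideline capacity: {cores * guideline_vms_per_core} VMs")
print(f"{target_vms} VMs means {target_vms / cores:.1f} vCPUs per core, "
      f"about {target_vms / (cores * guideline_vms_per_core):.1f}x the guideline")
```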