On my desktop computer I have VirtualBox, and I can run a lot of concurrent VMs at near-native speed.
On my server, which is twice as powerful as my desktop, I have Debian + VMware Server 1.0 (because I don't like the Java bloat introduced with 2.0), and if I run a single VM it also runs at near-native speed. The real bottleneck is disk access speed: if I start TWO (yes, just 2!) VMs at the same time (read: whenever the server is turned on), the server is paralyzed for 40 minutes. 40 minutes to boot 2 Windows VMs! Completely useless! I had better performance when I installed Virtual PC on a 400 MHz Celeron! If I search for "vmware slow hdd access" I get tons of results, so I assume this is a huge VMware problem, right?
So I was thinking of one of these actions:
- Replace the server HDD with two SSDs in RAID 0
- Switch to Proxmox VE
Has anyone tried Proxmox? How much better is it? Will it fix the bottleneck? I don't have another spare server to experiment with, so if I wipe my server to play with Proxmox, I will lose at least 2 working days...
I have seen this behaviour when I assign too much memory to the VMs. When I start a VM that grabs memory from the host OS above some threshold, everything dies except for the hard drive LED. It takes an age just to shut down the VM.
Fine-tuning the memory footprint of the VMs has done wonders for me.
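For what it's worth, that tuning is just the memsize value in each VM's .vmx file; the 512 below is only an example figure:

    memsize = "512"

Keeping the sum of all guests' memsize comfortably below the host's physical RAM is what avoids that wall.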
It sounds like something is seriously wrong with your setup because there's just no way it should take 40 minutes for a couple of VMs to boot.
If disk I/O is an issue your best bet is to add drives and dedicate a drive (or RAID array) to each VM.
Booting two VMs from the same hard drive will cause drive thrashing (the heads jumping from place to place, consuming more time than actually reading data), especially if the host OS is on the same drive. Boot them separately to avoid this thrashing, and your total boot time will be lower.
I always try to put my VMs on separate drives and then do not perform concurrent actions on any that share a drive (spindle) with other VMs/OSs.
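As a rough sketch of that layout (device names, mount points and filenames here are just placeholders): mount one physical drive or array per VM, then point each VM's virtual disk at its own mount in the .vmx.

    # one physical drive (or array) per VM -- placeholder devices
    mount /dev/sdb1 /vmstore/vm1
    mount /dev/sdc1 /vmstore/vm2

    # in each VM's .vmx, point the virtual disk at its own spindle
    scsi0:0.fileName = "/vmstore/vm1/vm1.vmdk"

You can also watch the thrashing directly with iostat -x 2 (from the sysstat package) while two VMs boot from the same drive: near-100% utilisation with tiny transfers per request is the giveaway.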
Yes, VMware Server's disk I/O performance is generally pretty ordinary. I use KVM on my desktop for local virtualisation, and we use a mix of Xen and VMware ESX for datacentre virtualisation, while keeping a close eye on KVM for that role too.
Have you installed the VMware drivers (VMware Tools) in the guest OS? If not, do so.
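In a Windows guest it's just "VM > Install VMware Tools" and running the setup from the mounted ISO; in a Linux guest it looks roughly like this (paths and version numbers vary):

    # inside the Linux guest, after selecting "Install VMware Tools" on the host
    mount /dev/cdrom /mnt
    tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp
    cd /tmp/vmware-tools-distrib
    ./vmware-install.pl    # accept the defaults, then reboot the guest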
Make sure you have a fast disk subsystem. For years I was running four VMs on VMware Server 1.0 with no issues. I've just upgraded to 2.0, so I'll let you know how that goes, but so far no issues there either.
One thing that helped me considerably with I/O was switching from RAID 1 to RAID 10. Night and day difference.
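With Linux software RAID that's a one-liner with mdadm; the device names below are only placeholders, and it needs four drives instead of two:

    # create a 4-disk RAID 10 array for the VM store (destroys data on those disks!)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext3 /dev/md0
    mount /dev/md0 /vmstore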
The other thing you could try is adding the following lines to the VMware Server config file:
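These are the host-wide settings most often quoted for /etc/vmware/config (take the values as the commonly suggested ones, not gospel):

    prefvmx.minVmMemPct = "100"
    prefvmx.useRecommendedLockedMemSize = "TRUE"

The first keeps all of the guests' memory in host RAM rather than letting it spill to disk; the second lets VMware lock it there.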
And the following to your .vmx files:
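Again, the per-VM lines usually suggested are along these lines:

    mainMem.useNamedFile = "FALSE"
    sched.mem.pshare.enable = "FALSE"
    MemTrimRate = "0"

These stop VMware from backing guest RAM with a .vmem file on disk, disable page sharing, and disable memory trimming, all of which cut down on disk traffic.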
See this post on the VMware forums.
Well, you might not believe it, but I wiped my server (it was only 4 days old, so there was no important data on it yet) and installed the Proxmox VE distribution (Debian 5.0 + QEMU-KVM + OpenVZ).
Wow! It is so much faster than VMware on Debian!
There is a difference, though; let me explain:
VMware is good at managing RAM: the unused RAM of one VM was left free for the other VM. But I/O makes the VM "hang up" while it waits for the emulator to write to the HDD. So if your VMs are hitting the HDD, unless you have a RAID 0+1 set or a physical HDD for each VM, you will be disappointed by the performance.
qemu-kvm, on the other hand, doesn't share the unused RAM between the guests, or does it a lot less effectively than VMware (as I saw from the web UI of both emulators), but I think qemu caches the I/O in RAM and then writes to the HDD later (in the web UI there is a percentage indicator, "IO delay: 5%"). The performance gain is huge!
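I believe that matches KVM's writeback cache mode: the guest's writes land in the host page cache first and are flushed to disk later. As a rough sketch (a bare qemu-kvm command rather than what Proxmox generates for you, and the disk path is just the Proxmox default used as an example), the cache mode is set per disk:

    # start a guest with 512 MB RAM and a disk that uses the host page cache
    # (cache=writeback buffers guest writes in host RAM before hitting the disk)
    qemu-system-x86_64 -enable-kvm -m 512 \
        -drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,cache=writeback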
I'm still working on the bottleneck myself, but I went into the VM BIOS and disabled all the memory and legacy settings, and that took a 10-minute Vista boot down to almost normal... for a VM. I'm still horribly laggy with disk writes and reads, but at least the machine works now. Oh, and I did reduce the VM's memory from a gig to 512 MB. My guess is the BIOS caching was the problem with the slow boots (still working on the disk problem). The desktop works well, but disk access is... BAD.