I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they are running dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values.
Where are some areas where I can tweak the performance? Even logging onto the VMs can be extremely sluggish, and trying to install software on any of them is a new experience in pain.
Dell PowerEdge T310
Xeon X3460 2.80 GHz
32 GB RAM
1 HD (2 TB)
I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host, and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices about video RAM and some hints about Windows activation issues, but nothing that would point to the sort of sluggishness I'm experiencing.
1 Windows Server 2008 R2 (64-bit) - 4 GB RAM
1 Windows 7 (32-bit) - 2 GB RAM
1 Vista (32-bit) - 1 GB RAM
3 XP (32-bit) - 1 GB RAM each
Over to you!
Thanks - Shawn
My guess is that you are running all of this on one HD. VMware is all about IOPS, and the first thing to go is usually IOPS. Use esxtop to check your IO numbers; one HD is good for about 150 IOPS.
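If you have access to the ESXi console (or the vSphere CLI for a remote session), this is roughly what I'd look at; the hostname below is just a placeholder, and the keystrokes are the standard esxtop disk views:

    # On the host itself (Tech Support Mode), or remotely via the vSphere CLI:
    esxtop
    resxtop --server esxi-host.example.local

    # Once it's running:
    #   d - disk adapter view
    #   u - disk device view
    #   v - per-VM disk view
    # Watch CMDS/s (total IOPS), DAVG/cmd (device latency, ms),
    # KAVG/cmd (kernel/queueing latency) and QUED (queued commands).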
To continue on @Jim B's comment: I'd guess you have a 7,200 RPM SATA drive, since you mentioned 2 TB capacity. That drive is going to give you under 100 IOPS, which will really struggle if those VMs are doing much of anything with IO.
If it's possible, an option would be to add an SSD to your box. If your VMs are thin provisioned (you can convert them with vmkfstools) and don't have a ton of data, it would serve you well.
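The conversion looks roughly like this (the datastore and file names are just examples, and the VM needs to be powered off first):

    # Clone the existing disk to a thin-provisioned copy on the new datastore
    vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
        /vmfs/volumes/ssd-datastore/myvm/myvm-thin.vmdk -d thin

    # Then point the VM at the new vmdk (edit settings in the vSphere Client
    # or update the .vmx) and delete the old disk once it boots cleanly.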
Your problem is the hard disk. As Jim B said, use esxtop or the disk counters in the VI Client's Performance tab to check the actual numbers, but you will almost certainly find that the disk latencies are very high (many tens of ms, if not hundreds) and that queue lengths are long. In your case a sustained queue length over 1 will be a problem, because you only have one disk to service the IO requests.
One 2 TB HD is good for around 80 IOPS under stress, because at best it's a 7.2K SATA disk; if it's a 5,400 RPM drive it will be even worse. Running six assorted Windows VMs off a single disk concurrently is going to be dog slow on any platform. You typically need 30-50 IOPS per Windows VM, and more if they are doing any sort of work; new installs of Windows 7 and Vista will also have indexing running like a train for a while, which will certainly stress the disk IO. You say that things were better with VMware Server: were you running this many VMs concurrently there, and if so, with what hard disk subsystem? At a minimum you will want three or four SATA disks in RAID 5 to make this setup bearable; see the rough math below.
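Back-of-the-envelope, using the per-VM figures above (rough planning numbers, not measurements from your box):

    Demand : 6 Windows VMs x 30-50 IOPS each     ~ 180-300 IOPS
    Supply : 1 x 7.2K SATA disk                  ~  80 IOPS
             4 x 7.2K SATA in RAID 5 (reads)     ~ 320 IOPS
             (each write costs roughly 4 back-end IOs due to parity,
              so write-heavy workloads see far less than that)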
In addition to the HD stuff, make sure you install VMware Tools. Without those, even on the highest-performing systems I've seen, things were dog slow until they were installed.
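If you want to confirm from the host which guests actually have Tools running, something like this from the ESXi console should show it (assuming Tech Support Mode is enabled; replace the VM ID with one from the list):

    # List the registered VMs and their IDs
    vim-cmd vmsvc/getallvms

    # Check a specific VM's Tools state
    vim-cmd vmsvc/get.guest <vmid> | grep toolsStatus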