I've set up a pre-production ESXi server and I'd like to do some load testing on the guest operating systems I have on it.
Specifically, what I'm interested in is:
- Disk access: will the guest OSes be fighting over read & write access to the disks?
- Processor use & sharing between the guest OSes
- Memory use & sharing between the guest OSes
- Load on a SQL Server instance within a guest OS
- Load on an Exchange server within a guest OS
I'm pretty new to load testing like this, so I really don't even know what to ask.
I'd like to be able to vary the options, so I can show, for example, that at 1,000 Exchange users we need another Exchange VM created.
Are there any standard benchmarks?
From a Linux perspective there's bonnie++ for disk benchmarking/load generation, and cpuburn for burning CPU cycles. For example:
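A rough sketch of running both inside a Linux guest (the scratch path, sizes, and instance counts are placeholders to tune to your guest):

```
# bonnie++: -d test directory, -s total file size in MB (ideally 2x guest RAM
# so the page cache can't hide the disk), -r guest RAM in MB, -u user to run as
bonnie++ -d /mnt/scratch -s 4096 -r 2048 -u nobody

# cpuburn ships per-CPU-family binaries (burnP6 for Intel, burnK7 for AMD);
# launch one instance per vCPU to peg the guest's processors
burnP6 &
burnP6 &
```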
I don't know any good memory thrashers off-hand, but for VMware that's not a good idea anyway: VMware assumes a certain amount of memory overlap (page sharing) between VMs, and deliberately breaking that assumption will just result in bad performance.
Try VMware's own VMmark. It's probably not as specific as what you are looking for, but it is a good broad measure of the performance of your underlying hardware setup and how it will scale under increasing load, which is roughly what you're after.
Check out UnixBench if you are using *nix-based guests:
http://www.hermit.org/Linux/Benchmarking/
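Running it is simple once it's downloaded and unpacked (a sketch; the exact directory name depends on the version you grab):

```
# from the unpacked UnixBench directory: builds the tests, then runs the
# full index suite (CPU, pipes, process creation, filesystem throughput)
cd UnixBench
./Run
```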
Ideally you won't be putting guests that need lots of disk I/O on the system; they are not good candidates for virtualisation.
Memory and CPU use can be limited easily enough, either via resource pools or at the per-VM level. I've found that having limits on MHz and RAM, even high limits, can stop a runaway process from impacting other VMs much; see the sketch below.
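If you'd rather bake the limits into the VM itself, they can be set as scheduler options in its .vmx file (the values here are placeholder assumptions; the same settings are exposed in the VI client's resource settings):

```
# hypothetical per-VM caps in the guest's .vmx
sched.cpu.max = "2000"    # CPU limit in MHz
sched.mem.max = "4096"    # memory limit in MB
```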
"I'd like to be able vary the options, so I can show at 1000 exchange users we need another Exchange VM created sort of thing."
Loading up one VM to the point where the host is struggling doesn't prove squat, since the underlying hardware is already maxed out. You're just giving yourself an upper bound on what you can service with the existing hardware.
In VMware you should keep an eye on disk read/write latency. Once you start seeing latency figures in the 500ms-1000ms+ range, you know you are hitting your disks hard. esxtop is the easiest place to watch this:
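A sketch of the interactive esxtop views (field names can vary slightly between ESX versions; use resxtop remotely against an ESXi host if there's no console to log in to):

```
esxtop
# press 'd' for the disk adapter view, 'u' for the disk device view
# watch DAVG/cmd (device latency) and KAVG/cmd (kernel queuing latency);
# sustained high values mean storage is the bottleneck
```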
Are you going to be using direct-attached storage, or NFS/iSCSI boxes?
Virtualising lots of low-load servers onto one bigger box makes a lot of sense; trying to virtualise a heavily loaded system typically results in worse performance than just buying a decent dedicated box for the job.