I am trying to spec out a server machine to run VMware ESX/ESXi and host about 3-4 VMs running IIS, Apache, and BIND. We don't have a huge budget for this, and to a certain degree this is an experiment in virtualization. If it works, we will apply this solution to the rest of our data center.
I am concerned about performance when using RAID under multiple VMs on the same box. Does anyone have any advice or experiences they can share either in favor of RAID or opposed to it with multiple VMs? (Hopefully we can avoid a general pro/anti-RAID argument.) If your experience leads you to recommend against RAID in this case, how do you handle redundancy/availability? Thanks!
Given your application profile (i.e. it doesn't sound like it will be doing much writing) I'd say you'll be just fine with RAID 6. Oh, and worry more about how many VMs you store in a single datastore/LUN (keep it to fewer than 4 for decent performance) than about how many VMs are managed by the array.
The best bang for the IOPS is RAID 10: you get redundancy and aggregate the performance potential of all the disks. RAID 6 is slower than RAID 5 (it carries a higher write penalty), and if you're doing databases and BIND you'll want that extra speed. RAID 10 can be built from as few as 4 disks, just like RAID 6, so it's within your budget.
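To put rough numbers on the write-penalty difference, here's a back-of-the-envelope sketch. The per-disk IOPS figure and the read/write mix are assumptions (adjust for your actual drives and workload); the write penalties of 2/4/6 for RAID 10/5/6 are the standard rule-of-thumb values.

```python
# Back-of-the-envelope functional IOPS for a small array at different
# RAID levels. Assumed numbers (tune for your hardware):
#   - 4 spindles at ~150 IOPS each (roughly a 10k SAS/SCSI drive)
#   - a 70% read / 30% write workload
# Rule-of-thumb write penalties: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6.

def functional_iops(disks, iops_per_disk, read_ratio, write_penalty):
    raw = disks * iops_per_disk
    # Each read costs one physical I/O; each logical write costs
    # `write_penalty` physical I/Os on the array.
    return raw * read_ratio + raw * (1 - read_ratio) / write_penalty

for level, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(level, round(functional_iops(4, 150, 0.7, penalty)))
# RAID 10 510, RAID 5 465, RAID 6 450
```

The gap widens quickly as the write fraction grows, which is why write-heavy workloads favor RAID 10.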
With 3-4 VMs running on local storage, unless you have horribly slow drives and a poor RAID controller, you won't have any problems. If you can't go to 15k SCSI, then use RAID 10; if you can get 15k SCSI, then any RAID level will perform sufficiently.
I recently had a box running VMs, in this case on top of Windows Server 2003.
I'd recommend using at least 4 disks in a RAID 10 configuration, which gives enough IOPS and redundancy. The tricky question is hardware compatibility with ESX, as not all disk controllers are supported.
In our experience running 8 VMs (Windows Server 2003) at very low utilization, disk was not the problem.