What is the best way to configure disks in a VMware server that will be hosting multiple virtual machines?
A single RAID 5 array hosting multiple VMs would provide good throughput, but it means all the VMs share the same disks, so one VM's disk access can delay another's.
Skipping RAID and dedicating one disk per VM means disk access will be slower in general (single-disk speeds), but no VM is ever delayed by another VM hitting the same drive.
If all your VMs share the same disks you'll get better overall throughput, because your disk controller (and the disks themselves) can reorder reads and writes, and the deeper the command queue, the more effectively they can reorder them. If your concern is that one VM will be doing a ton of disk access and slow down the other ones, then you need a hypervisor that supports quality of service for I/O and makes sure one VM doesn't starve another (offhand, I think Solaris Containers allow this).
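As a toy illustration of why deeper queues help, here's a minimal sketch (Python, with a made-up head-travel cost model and invented request counts, so the numbers are only directional): it compares servicing random requests in arrival order against reordering them elevator-style within a queue window.

```python
import random

random.seed(42)

TRACKS = 10_000                     # simplified disk: cost of a request = head travel distance
requests = [random.randrange(TRACKS) for _ in range(256)]

def total_seek(order, start=0):
    """Sum of head movements needed to service requests in the given order."""
    pos, dist = start, 0
    for track in order:
        dist += abs(track - pos)
        pos = track
    return dist

def elevator(batch, start):
    """One sweep: ascending from the head position, then the stragglers descending."""
    up = sorted(t for t in batch if t >= start)
    down = sorted((t for t in batch if t < start), reverse=True)
    return up + down

def simulate(queue_depth):
    """Reorder requests only within a window of `queue_depth` outstanding commands."""
    pos, dist = 0, 0
    for i in range(0, len(requests), queue_depth):
        batch = elevator(requests[i:i + queue_depth], pos)
        dist += total_seek(batch, pos)
        pos = batch[-1]
    return dist

print("FIFO (no reordering):", total_seek(requests))
for depth in (4, 32, 128):
    print(f"queue depth {depth:3}:", simulate(depth))
```

Total head travel drops sharply as the reordering window grows, which is the same effect a deep command queue gives a real controller.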
For the disk configuration, it depends on your workload. RAID 5 performs badly if you're doing a lot of random writes (e.g. a database); for sequential writes, or a workload that is mostly reads, you'll get better performance out of it. But the main downside of RAID 5 arrays is their fragility: the array only survives one failed disk, so a second failure before the rebuild completes loses everything, and those double failures happen more often than you'd think. Overall, if you can afford it, RAID 10 offers a better balance of performance and reliability.
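To put rough numbers on that random-write penalty: a small random write on RAID 5 costs four disk operations (read old data, read old parity, write new data, write new parity), versus two on RAID 10 (one write per mirror side). A back-of-the-envelope sketch, assuming a hypothetical array of 8 spindles at ~150 IOPS each:

```python
def random_write_iops(disks, per_disk_iops, write_penalty):
    """Aggregate small-random-write IOPS: raw IOPS divided by the per-write I/O cost."""
    return disks * per_disk_iops / write_penalty

DISKS, PER_DISK = 8, 150            # assumed: 8 spindles, ~150 IOPS each
print("RAID 5 :", random_write_iops(DISKS, PER_DISK, 4))   # read data+parity, write data+parity
print("RAID 10:", random_write_iops(DISKS, PER_DISK, 2))   # write both mirror halves
```

On those assumptions RAID 10 sustains roughly twice the random-write IOPS of RAID 5 on identical hardware.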
While RAID 5 (and 4) has the benefit of higher write throughput for the capacity, there is a lot of concern about unrecoverable failures, and high-capacity drives were predicted to make RAID 5 rebuilds unreliable by 2009.
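That prediction comes down to unrecoverable read errors (UREs): rebuilding a degraded RAID 5 set means reading every surviving disk end to end, and at the commonly quoted consumer rate of one URE per 10^14 bits, the odds of hitting one grow quickly with array size. A rough sketch of that math (the drive sizes and 7-disk array are assumptions for illustration):

```python
def rebuild_failure_probability(surviving_disks, disk_tb, ure_per_bit=1e-14):
    """P(at least one URE while reading every surviving disk during a rebuild)."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8    # TB -> bits, decimal units
    return 1 - (1 - ure_per_bit) ** bits_read

for disk_tb in (0.5, 1, 2):
    p = rebuild_failure_probability(surviving_disks=6, disk_tb=disk_tb)
    print(f"{disk_tb} TB drives, 7-disk RAID 5: ~{p:.0%} chance the rebuild hits a URE")
```

With 2 TB drives the rebuild is more likely to fail than succeed, which is why larger drives push people away from single-parity RAID.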
I migrated to RAID 10, and I typically configure it 3 drives wide (2 mirrored with one spare) and as many drives deep (striped) as needed to meet the capacity requirements. Modern drives and controllers are good enough at caching writes that I haven't noticed a tremendous speed impairment.
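If it helps, here's the capacity math for that layout as I understand it (each 3-drive group being a mirrored pair plus a spare, with the groups striped together; the drive size is a placeholder):

```python
def raid10_layout(depth, drive_tb, width=3):
    """Each group: 2 mirrored drives + 1 spare; `depth` groups striped together."""
    total_drives = width * depth
    usable_tb = depth * drive_tb        # one drive's worth of usable space per group
    return total_drives, usable_tb

for depth in (2, 4):
    drives, usable = raid10_layout(depth, drive_tb=1.0)
    print(f"{depth} groups deep: {drives} drives, {usable} TB usable")
```

So the layout spends two-thirds of the raw spindles on redundancy and spares, which is the price of surviving failures gracefully.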