I have 6 x 500GB SATA drives that I want to build a RAID 10 from, which gives me roughly 1.3TB of usable space. Would it benefit me to create two datastores (splitting the 1.3TB in half) or just one large one? I need to accommodate 22 VMs.
I also considered 2 x RAID 5 arrays (3 disks per array), but everything I read points to running RAID 10 rather than RAID 5.
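For what it's worth, the raw capacity difference between the two layouts can be sketched in a few lines. This is a hedged back-of-envelope calc that ignores formatting and VMFS overhead (which is why you see ~1.3TB rather than a clean 1.5TB):

```python
# Usable capacity for the two 6 x 500 GB layouts being weighed.
# Drive sizes in GB; overheads deliberately ignored.

def raid10_usable(n_drives, size_gb):
    # RAID 10: half the drives hold mirror copies
    return n_drives // 2 * size_gb

def raid5_usable(n_drives, size_gb):
    # RAID 5: one drive's worth of capacity goes to parity
    return (n_drives - 1) * size_gb

print(raid10_usable(6, 500))      # one 6-disk RAID 10 -> 1500 GB
print(2 * raid5_usable(3, 500))   # two 3-disk RAID 5s -> 2000 GB
```

So the 2 x RAID 5 layout buys about 500GB more space, at the cost of the write performance discussed below.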
Performance-wise, use RAID 10.
Also, create volumes as large as you can, e.g. all disks in one array (bearing in mind the 2TB maximum datastore size in ESX).
This will allow one VM with heavy disk activity to read from all disks for the fastest performance when other VMs aren't hitting the disks hard.
Splitting the disks into two arrays just halves the throughput available to each one and forces segmentation for little benefit.
Typically, if you are not proactively managing your disk I/O, just lump together as many disks as possible and let the hypervisor handle the load balancing/prioritisation.
vSphere 4.1 has also been hinted to include tools to prioritise disk access for particular VMs should you want to do so, which may well solve your problem in a different way.
RAID 10 offers high performance as has been discussed, but you lose a good bit of space in your array. We've gone RAID 50 on our SAN. RAID 10 has high overhead, but the performance and reliability are good. With RAID 50 you pick up much better capacity efficiency over RAID 10, so you also get more space. For example, take a 16-drive array with 450GB SAS drives: RAID 10 will yield just 3.6TB of space, while RAID 50 will give you 6.3TB.
Here's a good website for RAID size calculations and performance ratings.
http://raidcalculator.icc-usa.com/
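If you'd rather check those numbers yourself, the same arithmetic the calculator does is only a few lines. This sketch assumes the RAID 50 is built as two RAID 5 spans of 8 drives each, matching the 16-drive example above:

```python
# Usable capacity check for the 16 x 450 GB example.

def raid10_usable(n_drives, size_gb):
    # RAID 10: half the drives are mirrors
    return n_drives // 2 * size_gb

def raid50_usable(n_drives, size_gb, spans):
    # RAID 50: each underlying RAID 5 span loses one drive to parity
    per_span = n_drives // spans
    return spans * (per_span - 1) * size_gb

print(raid10_usable(16, 450))      # 7200 GB... no: 8 data drives x 450 = 3600 GB
print(raid50_usable(16, 450, 2))   # 2 spans x 7 data drives x 450 = 6300 GB
```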
I'm using 1.5TB for my datastore sizes and will use dedicated datastores for high-availability applications like Exchange and SQL. We took a very long look at RAID 10 and RAID 50; it was the extra capacity that made us move to RAID 50, and so far the performance looks pretty good.
Instead of thinking about the size and number of datastores, think about which VMs are going to be higher IO bound, and put those on their own datastore. Definitely go RAID 10 as others have suggested.
At $WORK, we try to keep our VMFS LUNs between 200GB and 500GB. At some point in the past, I believe this was stated as a best practice by VMware, though I do not believe that is the case any more. The reason we continue to do this is to provide isolation between VMFS volumes in case one of them gets corrupted. If you have one massive VMFS and it gets corrupted, all of the VMs on that datastore are at risk, whereas if you split things out, only a subset are at risk.
Regarding RAID levels, I'd agree with ccame that RAID10 will give you much better performance (especially for writes) than RAID5.
For optimal performance you may want to consider creating three 500GB RAID 1 containers and splitting disk-heavy workloads across them. This reduces the chance that two disk-intensive VMs will be hitting the same physical disks (so, theoretically, fewer head seeks) if you micromanage correctly (if you mess up, you may degrade performance instead).
Practically, I see no problem with one big RAID 10 container and a single VMFS volume. I don't think splitting a single RAID container into multiple VMFS volumes will buy you anything performance-wise: you're constrained by physical limits that exist independently of the VMFS volume definitions (but someone will correct me if I'm wrong).
Edit to add: As others have mentioned, RAID5 is not a great idea performance wise :-)
Is performance your only consideration? You've received good advice on that so far, but consider things like snapshots. If you want to use them (and backup software that works at the VMware host/storage level probably will, even if you don't plan to otherwise), then make sure you account for their space when planning and distributing disk workloads.
My 2 cents: go with RAID 5 and create one (1) disk group with 5 drives + 1 spare; that will give you approx. 2TB of usable space, which can be divided into 2 x 1TB datastores (choose a 4MB block size for the VMFS datastores).
Make sure that one SP is the primary controller for the first LUN and the second SP for the other. Performance is going to be good (limited only by the fact that you are using SATA drives), and 1TB datastores are the sweet spot for VMware: you don't have too many (an administrative nightmare), but you still have enough space to accommodate large vmdks if you need to.
With RAID 5 you will take a CPU and write-speed penalty (parity needs to be computed, which costs CPU, then written to disk, which reduces write speed/IO for other operations). If you have hardware RAID you can worry less about the CPU penalty, but the other problem will persist, and your ~600 IOPS is not a lot.
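To put rough numbers on that write penalty, here's a sketch using the usual rule-of-thumb penalty factors (2 writes per logical write for RAID 10, 4 for RAID 5) and an assumed ~100 IOPS per 7.2k SATA spindle. These figures are assumptions for illustration, not measurements from your array:

```python
# Rough effective IOPS under a RAID write penalty.
# drive_iops and the 70% read mix below are assumed values.

def effective_iops(n_drives, drive_iops, read_fraction, write_penalty):
    raw = n_drives * drive_iops                 # aggregate spindle IOPS
    write_fraction = 1.0 - read_fraction
    # each logical write costs write_penalty physical IOs
    return raw / (read_fraction + write_fraction * write_penalty)

raw = 6 * 100                                   # the ~600 raw IOPS mentioned above
print(round(effective_iops(6, 100, 0.7, 2)))    # RAID 10, 70% reads
print(round(effective_iops(6, 100, 0.7, 4)))    # RAID 5, 70% reads
```

With a 70/30 read/write mix, RAID 5 loses noticeably more of the raw 600 IOPS than RAID 10 does, which is the write penalty in practice.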