VMware and many network evangelists try to tell you that sophisticated (= expensive) Fibre Channel SANs are the "only" storage option for VMware ESX and ESXi servers. Well, yes, of course. Using a SAN is fast, reliable and makes vMotion possible. Great. But can all ESX/ESXi users really afford SANs?
My theory is that less than 20% of all VMware ESX installations on this planet actually use Fibre Channel or iSCSI SANs. Most of those installations will be in larger companies that can afford them. I would predict that most VMware installations use "attached storage" (VMDKs stored on disks inside the server). Most of them run in SMEs, and there are so many of them!
We run two ESX 3.5 servers with attached storage and two ESX 4 servers with an iSCSI SAN. And the "real-life difference" between the two is barely noticeable :-)
Do you know of any official statistics for this question? What do you use as your storage medium?
I do a lot of VMware consulting work and I'd say that the percentages are closer to 80% of the installed base using high-availability shared storage (FC, iSCSI or high-end NAS), and a lot of my clients are SMEs. The key factor I've found is whether the business treats its server uptime as critical or not; for most businesses today it is.
You certainly can run very high-performance VMs from direct attached storage (an HP DL380 G6 with 16 internal drives in a RAID 10 array would have pretty fast disk IO), but if you are building a VMware or any other virtualized environment to replace tens, hundreds, or thousands of servers, then you are insane if you aren't putting a lot of effort (and probably money) into a robust storage architecture.
You don't have to buy a high-end SAN for the clustering functions - you can implement these with a fairly cheap NAS (or a virtualized SAN like HP\Lefthand's VSA) and still be using certified storage. However, if you are using shared storage and it doesn't have redundancy at all points in the SAN\NAS infrastructure, then you shouldn't really be using it for much more than testing. Redundancy means (at a minimum) dual (independent) HBAs\storage NICs in your servers, dual independent fabrics, redundant controllers in the SAN, battery-backed cache\cache destaging, redundant hot-swappable fans and power supplies, RAID 5\6\10\50 and appropriate numbers of hot spares.
The real-life difference between your systems is that if one of your standalone systems fails catastrophically you have a lot of work to do to recover it, and you will incur downtime just keeping it patched. With clustered, SAN-attached systems, patching the hypervisors, or even upgrading hypervisor hardware, should result in zero downtime. A catastrophic server failure simply brings the service down for the length of time it takes to reboot the VM on a separate node (at worst), or if you have Fault Tolerance covering those VMs then you have no downtime at all.
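For what it's worth, a quick way to sanity-check that you're actually getting those clustering benefits is to ask vCenter. Below is a minimal pyVmomi sketch (pyVmomi is just one way to do this, and the vCenter hostname and credentials are placeholders I made up) that reports whether HA and DRS are enabled on each cluster:

```python
# Minimal pyVmomi sketch: report HA/DRS status per cluster.
# Hostname and credentials are placeholders - adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk every cluster in the inventory
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        cfg = cluster.configurationEx
        print(f"{cluster.name}: HA={cfg.dasConfig.enabled} DRS={cfg.drsConfig.enabled}")
finally:
    Disconnect(si)
```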
As a company we have over a thousand hosts; we started with FC, tried iSCSI for a while, but retreated to FC due to performance problems. We are looking seriously at NFS but have no conclusions yet. Oh, and we use HP XP/EVA and some NetApp; we have no local storage bar desktop/dev hosts.
As you can see, there's no one-size-fits-all, and it doesn't have to be a single class of storage solution. You can have multiple storage classes depending on availability and performance requirements.
For high-performance writes and reads I've found FC unbeatable, and whilst the price is high... it just works. For more mundane performance expectations iSCSI has actually performed pretty well, so I normally put the Exchange server mailbox files on an FC disk subsystem and the actual boot drives on an iSCSI-attached disk, with the DNS servers and Active Directory machines also running from iSCSI.
I've run ESX with SANs, NAS, and DAS. It entirely depends on what you're optimizing for:
For reliability and speed, I don't think you can beat a SAN.
For reliability and cost, I'd go with NAS.
And for speed and cost, DAS.
Not that the individual options don't overlap some, but those are the strengths I have witnessed.
We run four ESX 4 servers and we use an EqualLogic iSCSI SAN.
On smaller installations local storage is perfectly acceptable as long as you have decent disks - I'd say 10k RPM+ SAS drives. The only time you MUST use a shared disk (I intentionally didn't say SAN, as your shared disk can be an NFS share) solution is when you need to do clustering - VMware HA and DRS (see the sketch at the end of this answer for a quick way to check which datastores are actually shared).
Right now we have three tiers of storage - a Fibre Channel SAN, high-end EqualLogic SANs and low-end MD3000i SANs. The last two are iSCSI. We also run some servers off the local storage of the ESX servers - mostly utility servers that we don't care about being down for an hour or two while we fix things if everything goes boom on a box.
We are also running our test environment off a home-built NAS using 7.2k SATA drives and iSCSI Enterprise Target (performance isn't all that good, but it gets us by).
A lot of people are tending towards running against NFS shares in larger environments as well. I've wanted to play with this for a while but haven't found the time.
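As promised above, here's a rough pyVmomi sketch (again, the vCenter address and credentials are placeholders, and this is just one way to check) that lists each datastore, its type, and how many hosts have it mounted. Anything mounted by only one host is effectively local storage and won't help HA or DRS:

```python
# Rough pyVmomi sketch: show datastore type and host-mount count,
# to see which datastores are actually shared across hosts.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        mounts = len(ds.host)  # one HostMount entry per host that sees this datastore
        print(f"{ds.summary.name:30} {ds.summary.type:6} hosts={mounts}")
finally:
    Disconnect(si)
```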
We run four ESXi hosts (well, four live, plus one for testing) with an iSCSI fabric built from commodity switches, and a low-end Hitachi SAN - an SMS-100.
Even at that level, we have twin controllers, each with twin ports onto a SAS backplane so either controller can seize the disks, and twin NICs - which we cross-wire to twin switches, and on to twin NICs in the ESX hosts.
Each of the VMFS volumes has four visible paths, so it's reasonably tolerant. We use Dell PowerEdge switching for the fabric - they have potential issues for sure (not least, no redundant PSUs), but they are also cheap enough that having two preconfigured spares in a box, ready to swap in, becomes a real possibility.
Obviously, if you want more nines you need to spend more money, but I think the beauty of iSCSI, ESXi and commodity Gigabit Ethernet kit is that you can punch above your weight in terms of resilience and performance.
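If you want to verify that "four visible paths" kind of redundancy for yourself, here's a rough pyVmomi sketch (hostname and credentials are placeholders; esxcfg-mpath/esxcli on the host would tell you the same thing) that counts total and active paths per LUN on each host:

```python
# Rough pyVmomi sketch: count visible and active paths per LUN on each host,
# as a multipathing redundancy check. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        mp = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
        for lun in mp.lun:
            active = sum(1 for p in lun.path if p.pathState == "active")
            print(f"{host.name}: {lun.id} paths={len(lun.path)} active={active}")
finally:
    Disconnect(si)
```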
Everything has its pros and cons... the bottom line is SLA, application load, and scale. So if you need high-performance storage for a small deployment (1-5 hosts), you could probably pull it off with NFS (I actually once achieved better latency with NFS than with a SAN, using RAM disks). Now try scaling that and you will find that the cost of replicating your setup at scale is very comparable to a nice SAN, making FC the only logical option to pursue... Most of the time I end up using a combination for various services (apps, DBs, backups, archives) rather than a single platform, to optimize cost depending on the needs.