We are in the process of setting up two virtualization servers (Dell R710, dual quad-core Xeon CPUs at 2.3 GHz, 48 GB RAM) for VMware vSphere, with storage on a SAN (Dell PowerVault MD3000i, 10x 500 GB SAS drives, RAID 5) attached via iSCSI through a Gigabit Ethernet switch (Dell PowerConnect 5424, which they call "iSCSI-optimized").
Can anyone give an estimate of how much faster a Fibre Channel based solution would be (or rather, how much faster it would "feel")? I don't mean the nominal speed advantage; I mean how much faster the virtual machines will effectively run.
Are we talking twice the speed, five times, 10 times faster? Does it justify the price?
PS: We are not talking about heavily used database or Exchange servers. Most of the virtualized servers run at an average CPU load below 3-5%.
There are a lot of factors that determine how the performance will feel here. One tweak you might consider is setting up jumbo frames. Scott Lowe has a recent blog post here that shows some of what he did to achieve this.
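As a sanity check once jumbo frames are enabled, here is a minimal sketch (mine, not from Scott Lowe's post; Linux-only, Python 3, and the portal address is a made-up placeholder) that tries to emit a maximum-size unfragmented datagram toward the storage network - if a local link is still at a 1500-byte MTU, the send is refused:

```python
import socket

TARGET = ("192.168.130.101", 3260)   # hypothetical iSCSI portal address - adjust
MTU = 9000
PAYLOAD = MTU - 20 - 8               # 9000 minus IP (20) and UDP (8) headers = 8972

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set the Don't Fragment bit and make the kernel fail instead of fragmenting.
s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
try:
    s.sendto(b"\x00" * PAYLOAD, TARGET)
    print("8972-byte datagram left this host unfragmented; local MTU looks right")
except OSError as exc:
    print("Send refused - some link on this host still has an MTU below 9000:", exc)
finally:
    s.close()
```

This only proves the sending side; the switch ports and the MD3000i interfaces have to be set to a 9000-byte MTU as well, or the oversized frames will simply be dropped in transit.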
You mention that the guests will be running a low CPU load - those are always great candidates for virtualization - but the difference between Fibre Channel and iSCSI doesn't really come into play there.
If your VM guests are going to be running storage-intensive operations, then you have to consider that the speed of moving read/write operations from the VM host to the storage array may become your bottleneck.
Since a typical iSCSI link runs at 1 Gbps (over Gigabit Ethernet) and FC usually runs at 2-4 Gbps (depending on how much cash you're willing to spend), you could say that FC gives you roughly two to four times the raw transfer speed.
There are also the new 10 Gigabit Ethernet switches, but your PowerVault and PowerConnect don't support that yet.
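To put rough numbers on that, here is a quick sketch; the MB/s figures are my own rule-of-thumb estimates of usable throughput per path (raw line rate minus encoding and protocol overhead), not measurements:

```python
# Rough usable throughput per storage path (rule-of-thumb assumptions, not benchmarks).
usable_mb_s = {
    "1 GbE iSCSI":  110,   # ~125 MB/s raw, minus TCP/IP and iSCSI overhead
    "2 Gb FC":      200,
    "4 Gb FC":      400,
    "10 GbE iSCSI": 1100,  # raw 1250 MB/s, minus the same kind of overhead
}

baseline = usable_mb_s["1 GbE iSCSI"]
for name, mb_s in usable_mb_s.items():
    print(f"{name:>12}: ~{mb_s:4d} MB/s  ({mb_s / baseline:.1f}x the 1 GbE path)")
```

Note that these are per-path ceilings for sequential transfers; multipathing adds paths, and the random small-block I/O of a typical VM mix rarely gets anywhere near them.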
However, that doesn't necessarily mean the machines will work faster: if they are running applications with low I/O, they may well perform at the same speed.
The debate over which is better is never-ending, and it will ultimately come down to your own evaluation and results.
We have multiple deployments of FC-based mini-clouds and iSCSI-based mini-clouds, and they both work pretty well. We're finding that the bottleneck is at the storage array level, not in the iSCSI traffic over 1 Gb Ethernet.
You are more likely to be bottlenecked by the number of spindles than by the speed of your transport.
That is, yes, the raw speed of FC is higher than iSCSI, but if you are (hypothetically) trying to run 200 VMs off 6 spindles (physical disks), you're going to see worse performance than if you run 200 VMs off 24 spindles over iSCSI. In our nearly idle lab environment we're seeing about 2 NFS ops per VM (roughly 240 ops against 117 VMs), which might give you some notion of how much I/O you'll have.
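To make the spindle argument concrete, here is a rough sketch; the per-spindle IOPS figure, the 30% write mix, and the RAID 5 write penalty of 4 are generic rule-of-thumb assumptions, not specs for any particular array:

```python
def per_vm_iops(spindles, vms, spindle_iops=150, write_fraction=0.3, raid_penalty=4):
    """Effective random IOPS available to each VM, given a RAID write penalty."""
    raw = spindles * spindle_iops
    # Each front-end write costs `raid_penalty` back-end I/Os (4 for RAID 5).
    effective = raw / ((1 - write_fraction) + write_fraction * raid_penalty)
    return effective / vms

print(f"200 VMs on  6 spindles: ~{per_vm_iops(6, 200):.1f} IOPS per VM")
print(f"200 VMs on 24 spindles: ~{per_vm_iops(24, 200):.1f} IOPS per VM")
```

Neither total comes anywhere near saturating even a single 1 Gb iSCSI link, which is the point: the spindles run out long before the transport does.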
I don't think you'll see much difference based on the transport unless you know you have very heavy sequential I/O (a heavy instrument data log stream? Video archiving? To be honest, I don't know which real-world scenarios look like this).
I really don't think you'd notice the transport unless the IOPS capacity of the disks dramatically outweighs your load. I would make the decision on other criteria (ease of management, cost, etc.).
We went with NFS on a NetApp the last time we added storage.
It's a very noticeable change, though to be truthful we were going from Linux server-based iSCSI (a software iSCSI target) to fibre - that is, from a testing environment to production - when my last company was rolling out VMware-based shared hosting. Our VMware rep stated that fibre has much less overhead when multiple VMs on a single ESX host need access to shared storage.

I noticed the general usage of a Win2k3 VM instance roughly doubled in performance, and disk I/O, which I tested using HD Tune inside the VM, was faster than our Dell 2850's local I/O (3 x 73 GB in RAID 5 on a PERC 4, if memory serves). Granted, we were running maybe five or so VMs on each ESX host with low usage, as we were still being trained up on it.
Your VMware rep should have plenty of documentation on Fibre Channel vs. iSCSI, including some overall benchmarks, or at least real-world implementation stories and comparisons. Ours sure did.
I know the issue has been resolved, but I suggest you have a look at this article about FC vs iSCSI.
We have an FC-based SAN with everything at 2 Gbps and have tested 4 Gbps HBAs without seeing any difference in performance. The bottleneck for us is always drive speed. You can also have an 8 Gbps FC SAN and still see no performance increase if the drives in your array are SATA or even 10K drives; it also depends on the RAID type and array size. Also, minus one to Mike for using the word "cloud."
I believe you'll see a doubling of overall I/O capability if you move to a 4 Gbps or better FC SAN.
We had large FC-served VMware farms. Under pressure to reduce storage cost we started building new extensions using NetApp 10 Gbps iSCSI, ran into performance problems, and moved them all over to FC - it was the best thing we could have done. We're now doubling the number of VMs per host and getting the same performance we were seeing under iSCSI.
Of course, our very mixed VM load profiles may have exacerbated this, but if you can afford FC then I'd strongly urge you in that direction.
I could go into detail (in fact I happily will if prompted), but ultimately iSCSI is almost a 'something for nothing' product, and we all know about free lunches :)
While this was not asked, I am just curious whether RAID 5 is a good choice for a virtual environment - have you considered RAID 10?
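For what it's worth, the usual reason to ask is the RAID write penalty. Here is a small sketch comparing the two layouts on the same ten-drive shelf, using the textbook penalties (4 back-end I/Os per write for RAID 5, 2 for RAID 10); the per-spindle IOPS and write mix are assumptions on my part:

```python
SPINDLES, SPINDLE_IOPS, WRITE_FRACTION = 10, 150, 0.3   # assumed figures

def effective_iops(raid_penalty):
    raw = SPINDLES * SPINDLE_IOPS
    return raw / ((1 - WRITE_FRACTION) + WRITE_FRACTION * raid_penalty)

print(f"RAID 5  (write penalty 4): ~{effective_iops(4):.0f} IOPS")
print(f"RAID 10 (write penalty 2): ~{effective_iops(2):.0f} IOPS")
# Trade-off: RAID 10 on ten 500 GB drives leaves ~2.5 TB usable vs ~4.5 TB for RAID 5.
```

The more write-heavy the VM mix, the bigger the gap in RAID 10's favour.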