What storage connection method should one prefer for connecting ESX servers to a shared storage server over 10GbE links?
Specifically, I have two servers for VMware ESX and one server for shared storage.
The storage server is 2 x Xeon E5504 2GHz, 24GB RAM, 12 x SSD + 12 x SATA, with battery-backed RAID. The ESX servers are much the same but with two small SAS drives.
All three servers are connected via their 10GbE adapters.
I have a licence for ESX 3.5, but I am currently running ESXi 4.1 for testing purposes. The storage server is running Windows 7, also for testing.
I am aware of at least three methods:
1. iSCSI
2. NFS
3. FCoE
Which one would you recommend, and why?
No ifs, no buts: if you have the option of using 10Gbps FCoE and your configuration has proven stable, then it's the best and only way to go.
It's still quite new, but the efficiencies are overwhelming compared with iSCSI, and NFS is just plain 'different'.
Be aware, however, that you should be right up to date with ESX/ESXi 4.1 U1 for the best FCoE performance and stability, and that the list of supported 10Gb NICs/CNAs is quite limited. Other than InfiniBand systems, though, I've never seen shared-storage performance like it. I'm currently moving all of my FC over to FCoE, although this won't be complete for over a year due to the volumes involved.
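If it helps with checking the 4.1 U1 requirement, here is a minimal sketch, assuming pyVmomi (Python bindings for the vSphere API; the bindings are newer than 4.1 but the underlying API is not) and placeholder hostnames/credentials, that prints each host's exact product version and build so you can confirm what you're running before chasing FCoE issues:

```python
# Minimal sketch, assuming pyVmomi and placeholder credentials/hostnames.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skips cert checks
si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter/host address
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        about = host.config.product
        # Prints something like "VMware ESXi 4.1.0 build-NNNNNN"
        print(f"{host.name}: {about.fullName} (build {about.build})")
finally:
    Disconnect(si)
```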
If your goal is ease of use, you may want to consider NFS. It carries a modest performance overhead (roughly 5% lower overall throughput and about 20% more storage-related CPU) compared with FC.
Here's a comparison of NFS vs iSCSI vs FC in a 4Gb and 10Gb environment:
http://blogs.netapp.com/virtualstorageguy/2010/01/new-vmware-and-netapp-protocol-performance-report.html
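To illustrate the ease-of-use point: mounting an NFS export as a datastore is a single API call per host. Below is a minimal sketch, assuming pyVmomi; the server IP, export path and datastore name are placeholders for whatever your storage server exports.

```python
# Minimal sketch, assuming pyVmomi; server IP, export path and datastore
# name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.local",     # hypothetical ESX(i) host
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    spec = vim.host.NasVolume.Specification(
        remoteHost="10.0.0.50",          # hypothetical storage server IP
        remotePath="/export/vmstore",    # hypothetical NFS export
        localPath="nfs-vmstore",         # datastore name as seen by ESX
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)
```

The same thing is just the "Add Storage..." wizard in the vSphere Client, so either way there is very little to it.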
I am not sure how this would apply to your environment, but I can share with you how ours is currently configured.
First, we are 100% iSCSI for shared storage, running it over redundant, dedicated 1Gb network links. We currently have four ESXi hosts inside an IBM BladeCenter chassis.
Each host has two internal drives in a mirrored array that holds the base ESXi installation.
These hosts all share centralized storage on a pair of HP LeftHand iSCSI SAN nodes, which has reduced the complexity of SAN management: the SAN OS handles the actual drive allocation for each LUN internally and shifts data around to maintain maximum redundancy as the SAN cluster itself changes.
Simply put, we went with iSCSI because it was the default method for our new SAN nodes, but I couldn't imagine taking any other approach at this point. We have had zero outages over the last few months and are running our entire SAP infrastructure (DEV/TEST/PROD and the related databases) on these four machines.
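For anyone curious what pointing a host at an iSCSI SAN amounts to on the ESX side, here is a rough sketch using pyVmomi (assumed; the discovery address and credentials are placeholders, not our actual LeftHand VIP): enable the software initiator, add the SAN's send-targets address, and rescan.

```python
# Rough sketch, assuming pyVmomi; addresses and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    storage = host.configManager.storageSystem

    # Turn on the software iSCSI initiator (no-op if already enabled).
    storage.UpdateSoftwareInternetScsiEnabled(True)

    # Add the SAN's discovery ("send targets") address to the software HBA.
    for hba in storage.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            target = vim.host.InternetScsiHba.SendTarget(
                address="10.0.0.60", port=3260)   # hypothetical SAN cluster VIP
            storage.AddInternetScsiSendTargets(
                iScsiHbaDevice=hba.device, targets=[target])

    # Rescan so the presented LUNs show up as candidates for VMFS datastores.
    storage.RescanAllHba()
finally:
    Disconnect(si)
```

Binding the redundant VMkernel ports to the software HBA for multipathing is a separate step not shown here.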
Hopefully, this provides a little bit of insight into another architecture utilizing iSCSI and ESX.
I have no experience using Windows 7 as a storage server, but I run a small VM environment that uses NFS backed by Solaris, and it works really well. It is very easy to set up and configure, and the performance is quite good. I've got some iSCSI shares served through COMSTAR on there as well, and the performance is similar.
Cakemox is correct, though: just because I get good results with NFS talking to Solaris doesn't mean that NFS is right for a Windows-based solution. You might find better support in the iSCSI realm.
As for FCoE, I didn't think that was supported by vSphere/ESX?
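If you end up with a mix of NFS and COMSTAR iSCSI datastores like this, a quick way to see them side by side is to list each datastore with its backing type; a small sketch, again assuming pyVmomi and placeholder credentials:

```python
# Small sketch, assuming pyVmomi and placeholder credentials.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        for ds in host.datastore:
            s = ds.summary
            # s.type is "NFS" for NFS mounts and "VMFS" for iSCSI/FC-backed volumes
            print(f"{host.name}: {s.name} [{s.type}] "
                  f"{s.capacity // 2**30} GiB capacity")
finally:
    Disconnect(si)
```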
I doubt that this affects your Windows 7 test storage server, but if you're going to replace it with a dedicated SAN or NAS, check your vendor's capabilities and how they will affect your storage presentation.
One vendor (EMC in our case) recommended that the VMware hosts use NFS to access the storage rather than iSCSI. Exposing the VMs via NFS lets the SAN/NAS hardware perform deduplication and thin provisioning on the individual VM files, whereas with iSCSI the array only sees an opaque VMFS-formatted LUN, which prevents those features from working.