I've been reading some articles about storage on VMware ESXi. One thing they mention is the advantage of being able to use vMotion, DRS, etc. to achieve high availability. The term I kept seeing was "shared storage".
What does this mean?
Right now I am running my ESXi server with an iSCSI backend (a single Linux server).
Is that considered shared storage, even though only one server can connect to a single target?
Considering that NFS allows many connections to the same data, does it have advantages over iSCSI?
Can someone give me an idea?
I am thinking about converting my iSCSI server to NFS.
Also does ESXi (4.0+) support NFSv4?
What's not mentioned here is that the VMFS filesystem, and NOT iSCSI per se, is what makes the storage share-capable. Not all filesystems allow access from more than one system at a time; clearly NFS does. Something critically important to note is that iSCSI is block-level storage over IP, while NFS is a file-level protocol/filesystem. There are a ton of advantages to NFS that are simply not an option with iSCSI. I work for Nexenta, and every day we spend hours in debates over which is better. Ultimately, it comes down to what your environment needs and the level of knowledge of iSCSI and NFS among the staff expected to support it.
Multiple iSCSI initiators can connect to the same target, and, assuming the configuration allows it, multiple clients can access the same LUN. VMFS allows for this, and that's how DRS clusters work. Without this ability you could not do a lot of what clustering offers, such as vMotion, etc.
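To make that concrete for the single-Linux-server setup you describe: if the target runs the standard tgt daemon (scsi-target-utils), allowing more than one initiator is just a matter of listing them in /etc/tgt/targets.conf. A minimal sketch, with the IQN, device path, addresses and CHAP credentials all made up for illustration:

    # /etc/tgt/targets.conf -- one LUN shared with two ESXi hosts
    <target iqn.2011-04.lab.example:vmfs.lun0>
        backing-store /dev/vg_storage/vmfs_lun   # block device exported as the LUN
        initiator-address 192.168.10.11          # ESXi host 1
        initiator-address 192.168.10.12          # ESXi host 2
        incominguser vmware SomeChapSecret12     # optional CHAP credentials
    </target>

Both hosts then see the same raw LUN; it is the VMFS on-disk locking on top of it that keeps concurrent access safe.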
NFS is by default a shared filesystem. When you build a datastore on NFS, assuming you export the NFS share to all nodes in your DRS cluster, all files stored on the NFS datastore are accessible from all hosts in the cluster.
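For comparison, the equivalent on a Linux NFS server is a single export line covering every host (or the whole vmkernel subnet) in the cluster. The path and subnet below are made up; note that ESXi mounts NFS as root, so no_root_squash is generally needed:

    # /etc/exports on the Linux NFS server
    /srv/vmstore  192.168.10.0/24(rw,sync,no_root_squash)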
Again, the difference is that NFS is a filesystem, while iSCSI is the SCSI-3 protocol delivered over IP. Not all filesystems delivered via iSCSI are natively capable of being shared; NFS is natively a shared filesystem.
More than a single initiator can connect to a single iSCSI target, if the target is configured to allow it.
All of the benefits of having shared storage in ESXi are available with both iSCSI and NFS. Beyond that, which is "better" to use is subjective, which doesn't belong here.
Another aspect is that you cannot reduce the size of your iSCSI/VMFS partition on your SAN; however, you may be able to reduce the NFS volume, depending on which SAN you own. A NetApp can shrink an NFS volume (if there's free space, of course).
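For example, on a NetApp running Data ONTAP 7-mode, shrinking a flexible volume that backs an NFS datastore is a one-liner (volume name and size here are made up):

    netapp> vol size nfs_vol01          # show the current size
    netapp> vol size nfs_vol01 -200g    # shrink the FlexVol by 200 GB, if that space is free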
A small side effect on your VMs: when you use NFS, you will not be able to see the disk performance of your VMs.
I prefer iSCSI over NFS for running datastores because:
iSCSI is far more secure because it allows mutual CHAP authentication. iSCSI's I/O overhead is lower than NFS's. iSCSI uses MPIO (multipathing), plus you get block-based storage and LUN masking.
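As a rough illustration of the mutual CHAP point: on a 5.x-style esxcli this can be set per software iSCSI adapter along the following lines (adapter name and credentials are made up, and on 4.x you would normally do this from the vSphere Client instead; check esxcli iscsi adapter auth chap set --help for the exact options on your build):

    # require mutual CHAP on the software iSCSI adapter (names/secrets are placeholders)
    esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=mutual \
        --level=required --authname=esxhost01 --secret=SomeChapSecret12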
NFS datastores have, in my case at least, been susceptible to corruption with SRM. NFS speed used to be a bit better in terms of latency, but the difference is nominal now with all the improvements that have come down the pipe. NFS, in my opinion, is cheaper, as almost anything that is a share can be mounted.
I obviously prefer iSCSI, but iSCSI solutions (or even FC) are a bit more expensive. I would mount both an NFS and an iSCSI datastore, run VMmark, and see what your IOPS are; that would probably be the best way to decide. As far as NIC bonding, where would you bond: at the appliance level (your NAS) or at the VMkernel level?
And to answer your question: ESXi 4.0/4.1 only supports NFSv3.
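If you do want to run that side-by-side test, mounting the NFS export as a datastore on an ESXi 4.x host can be done from the console roughly like this (server address, export path and datastore label are made up):

    # mount an NFSv3 export as a datastore on an ESXi 4.x host
    esxcfg-nas -a -o 192.168.10.5 -s /srv/vmstore nfs_datastore01
    esxcfg-nas -l    # list NFS datastores to confirm the mount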
I work in a NetApp troubleshooting environment on a daily basis and here are some notes I'd like to add which may influence one's final decision on which method of connecting to back-end storage may be most appropriate for them.
iSCSI can have significantly less overhead, as it is a block-based protocol (the filesystem is managed host-side), and MPIO can be used with it, as previously indicated here, which is a big draw to its side.
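On the MPIO point, once multiple paths to the iSCSI LUN exist, spreading I/O across them is typically just a matter of switching the path selection policy to round robin. A sketch using 5.x-style esxcli (the naa. device ID is a placeholder; on 4.1 the equivalent lives under esxcli nmp device):

    esxcli storage nmp device list                # find the naa. ID of the iSCSI LUN
    esxcli storage nmp device set --device naa.600144f0example --psp VMW_PSP_RR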
But one must also note that if you are planning on using thin provisioning (no space guarantees on the storage) in your environment, you may also need to implement some strategy to free up blocks again on the storage side after they are released from the host LUN (using the VAAI UNMAP API - http://blogs.vmware.com/vsphere/2012/04/vaai-thin-provisioning-block-reclaimunmap-in-action.html ). Also, I do believe that VMware disabled VAAI UNMAP in certain releases for a while due to some performance implications ( http://blogs.vmware.com/vsphere/2011/09/vaai-thin-provisioning-block-reclaimunmap-issue.html ).
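To give an idea of what that manual reclaim looks like on the host side (datastore name made up, and the exact mechanism depends on your ESXi release):

    # ESXi 5.0/5.1: run from inside the VMFS datastore, reclaim up to 60% of free space
    cd /vmfs/volumes/iscsi_datastore01
    vmkfstools -y 60

    # ESXi 5.5 and later
    esxcli storage vmfs unmap -l iscsi_datastore01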
If one uses NFS, then the blocks are managed by the storage appliance natively, and thus hole-punching to clean up once blocks are released is not necessary.
Just another consideration in the big picture...