I have a File Server running Windows Server 2012 R2. It has four 6TB Western Digital RED drives. I want to host my Hyper-V machines, WSUS (Windows Server Update Services) content, and WDS/MDT deployment ISOs and files there, for example.
I am presented with SMB, NFS and iSCSI options to connect to the server. Meaning, I have the option of either setting up a share via NFS or SMB, or creating a virtual iSCSI drive that I can connect to.
How do I determine which option is best suited for a specific workload, or does it not matter which option I go with as long as the service can access the storage?
For a direct connection to a server, meaning true server-related storage, iSCSI is the way to go. You would then manage user access (via SMB/CIFS or NFS) on that server itself.
But the following part of your question makes it a bit unclear where and how this storage is connected to the main server in the first place:
Is this simply a physical Windows server with four 6TB Western Digital RED drives in it? Or is this a server that operates on its own, with the four 6TB Western Digital RED drives sitting in a NAS?
Or are you describing your connection from the client side? Meaning you will have this Windows server with four 6TB Western Digital RED drives and you then want to connect to it?
My guess is the latter. In general, you only need to use iSCSI if you need storage set up as if it were a physical drive connected directly to your machine, even though it is reached over the network, since iSCSI presents purely raw space. Meaning that when you connect to a freshly set up iSCSI volume, you need to initialize and format it first. I only do that when there is a need for massive storage and the connection is fairly permanent, since iSCSI allocates raw space for exclusive use.
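To make that concrete, here is roughly what connecting to an iSCSI target from a Windows box looks like in PowerShell; the portal address and IQN below are just placeholders for whatever your target actually exposes:

    # Point the initiator at the server exposing the raw space
    New-IscsiTargetPortal -TargetPortalAddress "192.168.1.10"

    # Connect to the target (placeholder IQN) and persist the connection across reboots
    Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:fileserver-target" -IsPersistent $true

    # The new disk arrives as raw, unformatted space; initialize and format it before use
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "iSCSI-Data"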
SMB/CIFS and NFS are the more common ways for sundry remote clients to connect to a machine and get at data stored on a share. SMB/CIFS would be the best and most common way to connect. The only times I have used NFS are when a non-Windows OS needs to talk to the server, such as a Linux server needing to access data in some way. But be forewarned: NFS can be a pain, because it is simply not as straightforward to set up on the client side as SMB/CIFS.
So the breakdown would be:
iSCSI: Permanent, preallocated, network-connected storage for a server that needs it. It is basically the same as having an external drive attached to your desktop, and all sharing functionality would need to be managed by the server itself. In your case, I would recommend preallocating raw space on that device for the Hyper-V stuff and then using the remaining space for SMB/CIFS or NFS (there is a rough server-side sketch of all three options after this breakdown).
SMB/CIFS: This would be the way most any client can remotely connect to your sundry shared storage. You just allocate space on the server for shares, set permissions, and away you go. This is not raw space but server-managed space, and it allows pretty much any client on any OS to connect remotely. But you cannot do things you can do with iSCSI, like treat that space as directly connected raw storage.
NFS: Basically the best fallback when SMB does not work for a given client. I use NFS mounts mainly for Linux setups that need general file-share connectivity but just act "weird" with SMB/CIFS.
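To tie that breakdown to actual commands, here is a rough server-side sketch in PowerShell of what each option could look like on your file server. The paths, sizes, names, and client addresses are all placeholders, and the iSCSI Target Server and Server for NFS roles would have to be installed for the iSCSI and NFS cmdlets to be available:

    # iSCSI: preallocate a fixed chunk of raw space and expose it to the Hyper-V host only
    New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\HyperV.vhdx" -SizeBytes 4TB
    New-IscsiServerTarget -TargetName "HyperVStorage" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv-host"
    Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVStorage" -Path "D:\iSCSIVirtualDisks\HyperV.vhdx"

    # SMB/CIFS: share part of the remaining space and control access with share permissions
    New-SmbShare -Name "DeployFiles" -Path "D:\Shares\DeployFiles" -FullAccess "DOMAIN\Deployment Admins" -ReadAccess "DOMAIN\Domain Users"

    # NFS: expose a folder for the Linux clients that act "weird" over SMB/CIFS
    New-NfsShare -Name "LinuxData" -Path "D:\Shares\LinuxData"
    Grant-NfsSharePermission -Name "LinuxData" -ClientName "192.168.1.50" -ClientType "host" -Permission "readwrite"
    # (A Linux client would then mount it with something like: mount -t nfs fileserver:/LinuxData /mnt/linuxdata)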
When you say you are presented with those options to connect to the server, what exactly are you referring to?
For Hyper-V, you would most likely use a locally hosted VHD(X) file on a drive exposed locally in the host's OS. There are reasons to use some of the other methods like iSCSI, but typically those are for use cases where you need clustering or failover, or where you have a SAN or NAS.
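As a rough illustration of that default approach, creating a VM whose disk is just a VHDX sitting on a drive local to the Hyper-V host could look something like this (names, paths and sizes are placeholders):

    # Create a dynamically expanding VHDX on storage local to the Hyper-V host
    New-VHD -Path "D:\VMs\AppServer01.vhdx" -SizeBytes 100GB -Dynamic

    # Create the VM and attach that locally hosted VHDX as its disk
    New-VM -Name "AppServer01" -MemoryStartupBytes 4GB -VHDPath "D:\VMs\AppServer01.vhdx"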