I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server configured like this:
- NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
- NIC03: connected to iSCSI VLAN #1
- NIC04: connected to iSCSI VLAN #2
- NIC05: dedicated to one virtual switch for VMs
- NIC06: dedicated to another virtual switch for VMs
The iSCSI NICs are obviously used for the storage that hosts the VMs. I put half of the host's VMs on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 and NIC06 connect to are trunked and we tag each VM's NIC for the appropriate VLAN. There is no clustering on this host.
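For reference, the layout above looks roughly like this when expressed with the Hyper-V PowerShell module (the module only ships with Server 2012 and later, so on this 2008 host the same thing is done in Hyper-V Manager; switch, adapter, VM names, and the VLAN ID are just examples):

```powershell
# NIC01/NIC02 are teamed for management with the vendor's teaming tool.
# NIC03/NIC04 stay host-only for iSCSI; no virtual switch is bound to them.

# NIC05/NIC06 each back one external virtual switch, not shared with the parent
New-VMSwitch -Name "vSwitch-A" -NetAdapterName "NIC05" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch-B" -NetAdapterName "NIC06" -AllowManagementOS $false

# The switch ports are trunked, so each VM's NIC is tagged for its production VLAN
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 20
```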
Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have two options:
1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
2. Create two additional virtual switches on the host, assign one to NIC03 and one to NIC04, then add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host (roughly sketched below).
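For option 2, the moving parts would look something like this (again only a sketch in 2012+ PowerShell terms, since on 2008 the switches and adapters are created in Hyper-V Manager; the VM and switch names are placeholders):

```powershell
# Option 2 sketch: bind new external switches to the existing iSCSI NICs,
# keeping the parent OS connected so the host still reaches the SAN,
# then give the guest one NIC on each path.
New-VMSwitch -Name "iSCSI-Path1" -NetAdapterName "NIC03" -AllowManagementOS $true
New-VMSwitch -Name "iSCSI-Path2" -NetAdapterName "NIC04" -AllowManagementOS $true

Add-VMNetworkAdapter -VMName "SQLVM" -SwitchName "iSCSI-Path1" -Name "iSCSI-Path1"
Add-VMNetworkAdapter -VMName "SQLVM" -SwitchName "iSCSI-Path2" -Name "iSCSI-Path2"
```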
I'm wondering how much overhead the VLAN tagging in Hyper-V adds, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs, or cause some other problem that could threaten storage access for the entire host, which would be bad.
Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?
With VMware ESXi (at least), your iSCSI storage is abstracted to your guests in the form of VMFS datastores, so there's really not much more to it than assigning more storage to a guest through the VI Client. While easier to administer and manage, this also gives you a layer of additional host security as your guests do not have direct access to the physical storage layer.
However, if you have a legitimate reason to do this, I would think the best way to accomplish it would be to put your guest VM on the same VLAN as your iSCSI devices, as described in your option #1.
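In Hyper-V terms that amounts to two extra guest NICs on the existing trunked switches, each tagged for a storage VLAN; a rough sketch (the VM name, switch names, and VLAN IDs are placeholders, and the cmdlets come from the 2012+ module, so on 2008 you would set the VLAN ID on each adapter in Hyper-V Manager):

```powershell
# Option #1 sketch: one guest NIC per iSCSI VLAN on the already-trunked switches
Add-VMNetworkAdapter -VMName "SQLVM" -SwitchName "vSwitch-A" -Name "iSCSI-1"
Add-VMNetworkAdapter -VMName "SQLVM" -SwitchName "vSwitch-B" -Name "iSCSI-2"

# Tag each adapter for its storage VLAN
Set-VMNetworkAdapterVlan -VMName "SQLVM" -VMNetworkAdapterName "iSCSI-1" -Access -VlanId 101
Set-VMNetworkAdapterVlan -VMName "SQLVM" -VMNetworkAdapterName "iSCSI-2" -Access -VlanId 102
```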
When it comes to iSCSI, we usually build a dedicated configuration for it, so I wouldn't mix the production network with the storage (iSCSI) network.
Giving a VM direct iSCSI access to storage can be helpful if, for example, you then take snapshots through the storage array (EqualLogic provides integration between SQL/Exchange and EqualLogic snapshots).
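For what it's worth, "direct access" from the guest just means running the normal software initiator inside the VM against the array, along the lines of the sketch below (the portal address is a placeholder; these cmdlets exist in 2012+ guests, while older guests use iscsicli or the iSCSI Initiator applet):

```powershell
# Inside the guest: start the initiator service and log into the array
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

New-IscsiTargetPortal -TargetPortalAddress "10.10.1.50"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```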
Attach the iSCSI target to the Hyper-V host, then make it a pass-through drive for the appropriate guest.
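A rough outline of that approach, assuming the new LUN shows up as disk 4 on the host (the disk number and VM name are placeholders; on Server 2008 you'd use diskpart and Hyper-V Manager instead of these 2012+ cmdlets):

```powershell
# The host logs into the iSCSI target; the new disk must stay offline on the host
Set-Disk -Number 4 -IsOffline $true

# Hand the raw disk to the guest as a pass-through drive on its SCSI controller
Add-VMHardDiskDrive -VMName "SQLVM" -ControllerType SCSI -ControllerNumber 0 -DiskNumber 4
```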