I have a pile of HP SAN and server boxes behind my desk.
Long story short: I'll have two vSphere hosts, each with 12 pNICs, a single ProCurve 2910al switch (actually two, but linked) dedicated to iSCSI/vMotion traffic, and the iSCSI SAN (a P4000).
Some NICs will be allocated to my production LAN as VM NICs (and so I can reach the vCenter server), while the iSCSI traffic and anything else best kept off the production LAN will go on the 2910al.
I want to present iSCSI LUNs to some of the guests (Exchange/SQL/maybe the file server) using the Windows iSCSI initiator, so I can use the SAN-integrated VSS snapshots.
I also ideally want to be able to manage the SAN from the production network, so I guess I'd use the routing function on the switch for that?
I'd appreciate suggestions on the optimal way to configure the switch/VLAN layout.
From the vSphere perspective you want redundancy on the Service Console/management network: ideally two separate pNICs connecting into two independent physical switches. So that's two NICs on an isolated management VLAN.
vMotion/Fault Tolerance (if you are using the latter) again needs at least two pNICs connected to two independent switches. So that's two more NICs on a separate VLAN on the ProCurve 2910(s).
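In case it helps to see the shape of that, here's a minimal sketch using the classic ESX command-line tools. Everything here is a placeholder: the vmnic numbers, vSwitch names and IP address are invented, and on a default install the management vSwitch and Service Console port group already exist, so treat it as an outline rather than something to paste in.

```
# Management: make sure the existing management vSwitch has two uplinks,
# cabled to two different physical switches
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# vMotion/FT: a separate vSwitch with two uplinks and one VMkernel port
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A vMotion vSwitch1
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 vMotion
# then tick the vMotion (and FT logging) boxes on that port in vCenter
```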
For iSCSI that will be presented to the vSphere environment you again want at least two pNICs, and in this case it is very, very important that they connect to independent switches. From a vSphere perspective the recommendation is to configure as many separate vSwitches as you have iSCSI pNICs, with a single VMkernel port and a single pNIC on each, and with vMotion/FT traffic disabled on each of those VMkernel ports. All iSCSI VMkernel ports then need to be bound to the iSCSI stack to enable failover, and multipathing if that is an option for your array.

It is possible to get load-balanced native multipathing without needing Enterprise Plus provided your vendor has a Multipathing Extension Module (MEM); if they provide a Path Selection Plug-in (PSP) then Enterprise Plus is required. I'm not sure how the LeftHand handles this; worst case is that you get failover but no actual load balancing. On the switch front, keep the iSCSI connections on their own VLAN on the ProCurves.
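As a rough sketch of that one-pNIC-per-vSwitch pattern and the binding step on ESX/ESXi 4.x - again with invented vmnic/vmk numbers and addresses, and assuming the software iSCSI adapter shows up as vmhba33 (check esxcfg-scsidevs -a for the real name):

```
# One vSwitch per iSCSI uplink, each with a single VMkernel port
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -A iSCSI-1 vSwitch2
esxcfg-vmknic -a -i 192.168.30.11 -n 255.255.255.0 iSCSI-1

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A iSCSI-2 vSwitch3
esxcfg-vmknic -a -i 192.168.30.12 -n 255.255.255.0 iSCSI-2

# Bind both VMkernel ports to the software iSCSI initiator (4.x syntax),
# assuming the new ports came up as vmk1 and vmk2
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33    # confirm both ports are bound
```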
For native iSCSI presented to the VMs you want the same sort of resilience, with at least two pNICs connected to separate physical switches. Ideally you should repeat the pattern used for the vSphere iSCSI switches: one pNIC per vSwitch and one VM port group per vSwitch. That enables the multipathing components inside the VMs to make sensible path-management decisions and gives them some visibility of the connection state of each path. If load balancing isn't important to you in the VMs then simple teaming and a single vSwitch will do, but given your plans that doesn't seem optimal. Again, these connect to the ProCurves.
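The matching sketch for the guest-attached iSCSI NICs, with all names and numbers placeholders:

```
# Two more single-uplink vSwitches, each carrying one VM port group
# dedicated to guest-attached iSCSI traffic
esxcfg-vswitch -a vSwitch4
esxcfg-vswitch -L vmnic6 vSwitch4
esxcfg-vswitch -A Guest-iSCSI-A vSwitch4

esxcfg-vswitch -a vSwitch5
esxcfg-vswitch -L vmnic7 vSwitch5
esxcfg-vswitch -A Guest-iSCSI-B vSwitch5
```

Each VM that needs direct LUN access then gets one vNIC on Guest-iSCSI-A and one on Guest-iSCSI-B, and the Microsoft iSCSI initiator with MPIO (plus whatever DSM HP supplies for the P4000) takes care of path selection inside the guest.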
I'd very strongly recommend that your two ProCurves are not configured as a single logical unit (using stacking) if at all possible. Plenty of inter-switch links configured as a single LAG (10 Gig if possible) is better from a manageability and safety perspective. Consider how you will upgrade firmware on these switches, or carry out other maintenance, down the line.
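Something along these lines on each 2910al, with the port numbers purely illustrative (use the 10 Gig module ports if you have them) and assuming the vMotion and iSCSI VLANs (20 and 30 in this example) already exist, gives you a single LACP LAG between the two switches carrying the relevant VLANs:

```
switch(config)# trunk 23-24 trk1 lacp
switch(config)# vlan 20
switch(vlan-20)# tagged trk1
switch(vlan-20)# exit
switch(config)# vlan 30
switch(vlan-30)# tagged trk1
switch(vlan-30)# exit
```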
In any case, that uses up eight of your 12 pNICs on each vSphere host, leaving you with four for production traffic, which seems reasonable enough.
You can then manage the iSCSI environment from VMs that have vNICs connected to the iSCSI VM port groups if you like; otherwise you will need to provide some connectivity between the ProCurves and your production environment. Definitely handle that at Layer 3 - you want to keep the iSCSI environment as free from extraneous traffic as possible at Layer 2.
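If you do route it on the 2910al rather than elsewhere, the sketch below (VLAN numbers and addresses invented for the example, with VLAN 50 standing in for a small transit VLAN towards your production router) gives the iSCSI VLAN a routed interface and a default route towards production; the production side then needs a route back to the iSCSI subnet, and an ACL to restrict what can cross is worth adding on top:

```
switch(config)# ip routing
switch(config)# vlan 30
switch(vlan-30)# ip address 192.168.30.1 255.255.255.0
switch(vlan-30)# exit
switch(config)# vlan 50
switch(vlan-50)# ip address 192.168.50.2 255.255.255.0
switch(vlan-50)# exit
switch(config)# ip route 0.0.0.0 0.0.0.0 192.168.50.1
```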
I'm guessing that no one is really going to be able to provide specifics without more information on which edition of vSphere you've got (only Enterprise and Enterprise Plus support some of the advanced features like storage multipathing), and what your requirements are for bandwidth and fault tolerance. With that said, when planning for this stuff I always break the NICs down into the following four categories, then determine what my requirements are for each, followed by what can actually be accomplished:

- Management
- vMotion (and FT, if used)
- Storage (iSCSI)
- VM/production traffic
Do you need redundancy for each? Do you need more than 1G for each? Do you have enough NICs to provide redundancy? (In this case, you do, IMHO.) Once you make these decisions then planning what to actually plug in where should be a piece of cake (assuming that you are going to maintain separation between the four networks above, which I recommend).
So it looks like you'll need at least two NICs for each of those networks just for redundancy, plus extra where you want more bandwidth for storage and VM traffic.
Good thing you've got 12 NICs in each host, because you're already at the limit!
Assuming the switches behave as a single unit, you'll need a VLAN for management, one for vMotion, and one for storage, and you'll connect everything up so that each group has an equal number of connections to the two physical switches.
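On the ProCurve side that layout might look roughly like the following, with VLAN IDs and port ranges entirely made up for the example; untagged membership works if each host uplink carries only one of these networks, otherwise tag the relevant VLANs on shared ports:

```
switch(config)# vlan 10
switch(vlan-10)# name "Mgmt"
switch(vlan-10)# untagged 1-4
switch(vlan-10)# exit
switch(config)# vlan 20
switch(vlan-20)# name "vMotion"
switch(vlan-20)# untagged 5-8
switch(vlan-20)# exit
switch(config)# vlan 30
switch(vlan-30)# name "iSCSI"
switch(vlan-30)# untagged 9-16
switch(vlan-30)# exit
```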