So, I'm getting a bit confused here. I have a Dell MD3200 SAN with dual controllers. Each controller has an Ethernet management port that connects to a switch, and the two controllers connect to independent switches for redundancy.
I'm building out my failover cluster and was told that I should have the SAN management port running on the same subnet, which will be isolated to a VLAN. If I just leave it at that, it makes sense. However, if I want to manage the SAN out-of-band over the network from a server that's not on that subnet/VLAN, I won't be able to.
So, the question is: if I have a 10.0.1.X subnet for my servers and a 10.0.2.X subnet for the failover cluster, do I make the SAN a part of 10.0.1.X or 10.0.2.X? I personally think I should leave it on 10.0.1.X, but I figured I'd ask just in case.
UPDATE
Just to give you more info, the servers in question are two Dell R710s, each with eight 1GbE ports. The switches are two Dell 6224s. I was planning on the servers having three teamed connections: 2 ports on a "Public" team, 2 ports on a "Private" team (for the cluster), and 4 ports on a "Virtual Machines" team.
The SAN is an MD3200, not the MD3200i. It connects to the servers over SAS via redundant HBA cables.
The cluster is going to be a Windows Server 2008 R2 Failover Cluster for Hyper-V.
The MD3200 connects to servers via SAS, so I'm thinking you mean the MD3200i. It's "normal" to have separate VLANs for management, iSCSI, and client traffic. You don't mention what kind of cluster software you're running, but for most of them it's pretty common for the cluster/heartbeat traffic to have its own VLAN.
Depending on your server and hardware, each of these may have its own physical NICs or may be shared. It's also normal to have a minimum of 4 NIC ports for 1GbE, or 2 ports of 10GbE/FC/IB/CNA plus 2 ports of 1GbE. And it's normal to have an L4 switch or router with access controls that allows all of these VLANs to be routed (selectively and where applicable).
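To illustrate what "routed selectively and where applicable" looks like, here's a rough Python sketch of the idea; the VLAN names and the allow-list are invented for the example, not anything specific to the 6224s. On the real switch or router you'd express the same policy as ACLs between the VLAN interfaces.

```python
# Rough model of selective inter-VLAN routing: the L3/L4 device only forwards
# traffic between specific VLAN pairs. VLAN names and allow-list are made up.
ALLOWED_ROUTES = {
    ("client", "management"),   # admin workstations may reach mgmt interfaces
    ("management", "client"),
    # iSCSI and heartbeat VLANs deliberately absent -> never routed anywhere
}

def is_forwarded(src_vlan: str, dst_vlan: str) -> bool:
    """Return True if traffic between the two VLANs would be forwarded."""
    return src_vlan == dst_vlan or (src_vlan, dst_vlan) in ALLOWED_ROUTES

print(is_forwarded("client", "management"))  # True  - selectively routed
print(is_forwarded("client", "iscsi-a"))     # False - storage stays isolated
```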
Breaking that down: you'd end up with separate VLANs for management, each iSCSI path, client/public traffic, and the cluster heartbeat.
Other thoughts: the NIC chips really matter. Make sure your servers have something good, and Google for cluster or SAN problems with the particular chip. I run Broadcom BCM5709 chips, for instance (all my servers use the exact same chip). Those chips had problems with past firmware that have since been resolved. Intel chips tend to be very good. In any case, check it out and be sure to run the latest firmware, drivers, and management software.
Generally you should have two independent VLANs, one for each iSCSI network. The management interface should generally sit alongside your servers, or in your general administration VLAN, depending on the setup.
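If you want to sanity-check the addressing plan for those VLANs before you build it, a few lines of Python will do; all of the subnets below are placeholders, not taken from your environment:

```python
import ipaddress

# Placeholder layout for the two independent iSCSI VLANs plus management;
# none of these subnets come from the question.
nets = {
    "iscsi-a":    ipaddress.ip_network("10.0.10.0/24"),
    "iscsi-b":    ipaddress.ip_network("10.0.11.0/24"),
    "management": ipaddress.ip_network("10.0.1.0/24"),
}

# Each network should be truly independent, i.e. no overlapping ranges.
names = sorted(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if nets[a].overlaps(nets[b]):
            raise SystemExit(f"{a} and {b} overlap - fix the addressing plan")
print("All subnets are independent.")
```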
Your question boils down to how important your management interface is. If you've suffered some sort of network failure, you can still use your SAN while the management ports are unavailable; you simply can't reconfigure it through the management port. While that's not ideal, you always have the ability to get in via serial or a crossover cable in an emergency.
"Failover cluster" means different things to different people. If you need a heartbeat between the two storage controllers, then they need to be able to ping each other. If it's just a standby device kept in synch to be used in case of disaster, then you could put it on its own subnet, vlan, or even network. Just so long as the replication traffic (which I'll assume is IP) can be routed there.