We have just upgraded our SAN backend to 10 Gbps connectivity, and we have a few Windows servers currently on 1 Gbps connections that we would like to upgrade by adding 10 Gbps NICs. Part of this project involves rolling out a VLAN configuration - previously, the entire company network did not use VLANs and sat on a single (and quite crowded) /24.
My question is: if the new 10 Gbps connection on these servers will be used only for SAN connectivity, but is connected to a layer 3 core, should the new NIC in each server be placed on a "server" VLAN or on the "storage" VLAN?
The layer 3 core switch has a 1.28 Tbps backplane, but I assume it doesn't do layer 3 routing at the same speed, which leads me towards provisioning the new NICs on the storage VLAN so that the traffic is only switched, not routed.
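To illustrate the distinction I'm drawing (with made-up addresses): if the server's storage NIC and the SAN targets share a subnet, frames stay within one layer 2 domain and never touch the routing engine; otherwise every packet crosses the layer 3 boundary.

```python
import ipaddress

# Hypothetical addressing plan: assume the storage VLAN uses 10.0.20.0/24.
storage_subnet = ipaddress.ip_network("10.0.20.0/24")

server_san_nic = ipaddress.ip_address("10.0.20.11")   # new 10 Gbps NIC
san_target     = ipaddress.ip_address("10.0.20.100")  # SAN portal

# If both endpoints sit in the same subnet/VLAN, traffic is purely
# layer 2 switched; if not, every packet is routed by the core.
same_l2_domain = server_san_nic in storage_subnet and san_target in storage_subnet
print("switched (no routing)" if same_l2_domain else "routed by the L3 core")
```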
Are there problems with one (or both?) of these approaches?
Put it on a dedicated storage-specific VLAN. Chances are there won't be a need to route at all if you run all of your storage and servers into the same switches. This also gives you the option to monitor JUST the SAN traffic, and potentially to traffic-manage it if needed.
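For example, if the storage VLAN ends up tagged as (say) VLAN 20, one quick way to confirm that a mirror/SPAN port is seeing only SAN traffic is a capture filtered on that tag. A rough sketch with scapy - the VLAN ID and interface name are placeholders for whatever you provision:

```python
from collections import Counter
from scapy.all import sniff

talkers = Counter()

def tally(pkt):
    # Count bytes per source IP so you can spot the heavy SAN talkers.
    if pkt.haslayer("IP"):
        talkers[pkt["IP"].src] += len(pkt)

# The BPF filter "vlan 20" matches only 802.1Q frames tagged VLAN 20,
# so anything captured here is storage traffic by definition.
sniff(iface="eth1", filter="vlan 20", prn=tally, count=1000)

for src, nbytes in talkers.most_common(5):
    print(f"{src}: {nbytes} bytes")
```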