I'm considering setting up a software-based iSCSI SAN on my home network. I'm looking at this because I've got two VM servers running and would like a common, easily managed storage pool rather than managing each VM server separately.
What I'd like to know is: are there real benefits to using multiple NICs in each of the servers that would comprise the iSCSI LAN? I'm thinking of having four gigabit PCI* adapters in the iSCSI server and two in each of the VM servers. (For the purposes of this question, let's assume that disk access will not be the limiting bottleneck.) All eight ports would be connected to the same unmanaged, consumer-grade gigabit switch, which would carry only iSCSI traffic. (Assume there's a separate LAN for regular traffic.)
Is there a point to setting up the hardware this way? Will I actually see a reasonable increase in the available speed?
*I say PCI because the storage server will only have PCI slots available.
(Edit: I'm not concerned with redundancy in this scenario, simply with getting the highest bandwidth possible out of the hardware.)
If I'm understanding your scenario correctly, I assume you're going to team the multiple physical network cards from within your virtualization environment and present that to your virtual machines? If that is the case, I think you'll have a problem getting greater performance on incoming connections to the VMs.
You see, outbound connections can be load balanced by the virtual environment, if it supports that. Inbound load balancing, however, is a property of the switch, and it is physically impossible for the virtual environment to influence it. I could get into the basics of ARP tables and IP addresses, but I'll assume you already know that. Traffic coming in to the VM can only be directed to one physical port on the switch unless the switch supports something like EtherChannel.

EDIT: Let me clarify that. Traffic going from the VMs to a single target can only be as fast as a single port on the unmanaged switch. If multiple PCs, each on their own switch port, access the VM and the VM responds to those requests, then it can simultaneously carry multiple port-speed traffic streams to the multiple endpoints. In that sense, you would see a performance increase if you frequently have multiple simultaneous connections to the VMs that tend to peg the port speed of the switch. I hope that made sense. End Edit.
Second Edit: Since you want to improve the throughput of incoming traffic to your iSCSI target, there's really no way that NIC teaming would help you. The bottleneck will still be that the switch can't aggregate ports, so the iSCSI target's IP address will always be tied to one switch port, and all incoming traffic to that address will be limited to the port speed of your switch. The best solution I can come up with is to put two NICs in the iSCSI target and dedicate each NIC, with its own IP address, to one VM server's LUN (a rough sketch of what that could look like is below), if that's even possible with whatever you're using as an iSCSI server. In your current setup, the only benefit you can ever get from virtual NIC teaming is sending multiple traffic streams out to different ports; you can never receive more than port-speed traffic on any single host on that switch. End Second Edit
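For illustration only: assuming the VM servers are Linux hosts running the open-iscsi initiator, and assuming the iSCSI target answers on two made-up portal addresses (192.168.10.1 and 192.168.10.2, one per target NIC), pointing each VM server at a different portal might look something like this:

    # On VM server A: discover and log in via the target's first NIC only
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m node -p 192.168.10.1 --login

    # On VM server B: discover and log in via the target's second NIC only
    iscsiadm -m discovery -t sendtargets -p 192.168.10.2
    iscsiadm -m node -p 192.168.10.2 --login

Each VM server then has its traffic pinned to a different switch port on the target side, so the two hosts don't contend for the same gigabit port.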
Off the top of my head, I'd assume that a redundant/teamed setup like that would be overkill for a home environment unless you specifically want to gain experience with it. However, nothing will tell you whether you need that kind of performance like doing some network analysis and graphing your bandwidth usage. Again, seemingly overkill for a home setup unless it's for experience's sake or you're hosting pr0n. =)
You will get greater throughput capacity, which, given all the other likely variables, you will never use or notice. But if you're just looking to play and learn, yes, iSCSI is pretty trivial to round-robin multipath (see the sketch below).
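As a minimal sketch, assuming a Linux initiator with open-iscsi and dm-multipath, and assuming made-up NIC names (eth1/eth2) and a made-up portal address, round-robin multipathing looks roughly like this:

    # Bind one iSCSI interface record to each physical NIC on the initiator
    iscsiadm -m iface -I iface0 --op=new
    iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth1
    iscsiadm -m iface -I iface1 --op=new
    iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth2

    # Discover the target through both interfaces and log in, which
    # creates two independent sessions (paths) to the same LUN
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1 -I iface0 -I iface1
    iscsiadm -m node --login

Then tell dm-multipath to spread I/O across both sessions in /etc/multipath.conf:

    defaults {
        path_grouping_policy  multibus
        path_selector         "round-robin 0"
    }

After that, multipath -ll should show both paths, and you use the /dev/mapper device it creates rather than the individual /dev/sd* devices. Whether you'll actually notice the extra bandwidth at home is another question.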