(This may not be possible, but I thought I'd ask just in case it is, as it will save a considerable amount of cash.)
I'm building a cluster of sorts that has one shared storage unit and two computing units.
I'm wondering whether it is possible to bond two 1000BASE-T NICs per computing unit and connect them directly to an identical pair of NICs on the storage server, with no switch in between, alternating which NIC each packet is transmitted on and reassembling the stream on the other end (mode 0, round-robin?).
This would theoretically increase throughput, and of course CPU usage along with it.
We are talking Linux or BSD here. Please do not mention Windows.
There may not be a standard for this, but perhaps there is a piece of software or kernel hack that does this.
The Linux bonding driver (I'm not sure, but I expect there is a BSD equivalent) creates software bonds of NICs independently of any particular switch technology. I haven't tried it without a switch, but since it is all done on the host side, I suspect it should work exactly the same over a direct crossover configuration:
http://www.kernel.org/doc/Documentation/networking/bonding.txt
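As a minimal sketch (untested without a switch), the bond can be built at runtime with iproute2; the interface names eth1/eth2 and the 10.0.0.1/24 address are placeholders for your own:

    # load the bonding driver and create a round-robin (mode 0) bond
    modprobe bonding
    ip link add bond0 type bond mode balance-rr

    # enslave the two direct-connect ports (they must be down first)
    ip link set eth1 down
    ip link set eth1 master bond0
    ip link set eth2 down
    ip link set eth2 master bond0

    # bring the bond up and give it an address
    ip link set bond0 up
    ip addr add 10.0.0.1/24 dev bond0

Repeat on the storage server with its own address, then check cat /proc/net/bonding/bond0 to confirm both slaves are up.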
On FreeBSD you have lagg(4). I am using it in failover mode, but man lagg also mentions loadbalance, roundrobin, and lacp as options. A rough sketch of the round-robin variant is below.
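For reference, something like the following should set it up, assuming em0 and em1 are the direct-connect ports (the interface names and the 10.0.0.2/24 address are placeholders):

    # bring up the physical member ports
    ifconfig em0 up
    ifconfig em1 up

    # create a lagg interface and attach the members using round-robin
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto roundrobin laggport em0 laggport em1

    # assign an address to the aggregate
    ifconfig lagg0 inet 10.0.0.2/24

Running ifconfig lagg0 afterwards shows the protocol in use and the state of each member port.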