I have two Netgear GS748TS 48-port gigabit stackable switches which, when stacked via two HDMI cables, allow hosts on one switch to talk to hosts on the other.
Netgear claim that the HDMI trunk allows for 10Gbps of bandwidth, but when I test using 4 computers and iperf, it appears that only 1Gbps of bandwidth is available for communication between the switches.
Is additional configuration required? Am I missing some basic networking concepts here?
I'd really like to keep all of the client ports for, well... clients, rather than sacrificing 16 or more of them for a trunk.
Any help is appreciated, many thanks.
EDIT: I'll re-test with higher quality HDMI cables and report back!
You need to ensure you have an HDMI v1.3- or 1.4-compliant cable to achieve this; even then, you'll never see more than 8.16Gbps due to encoding overhead.
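For what it's worth, the 8.16Gbps figure falls out of a quick back-of-the-envelope calculation (assuming the full 340MHz TMDS clock of HDMI 1.3):

    3 channels × 340 MHz × 10 bits = 10.2 Gbit/s raw TMDS bandwidth
    10.2 Gbit/s × 8/10 (8b/10b coding) = 8.16 Gbit/s of usable data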
I've never seen HDMI being used this way; most sysadmins just buy switches with 10Gig ethernet ports to deal with the...erm, 10Gig ethernet. I like the idea of the HDMI, but it sounds like a bit of a fragile solution (as is the case here) and it seems a bit 'prosumer' to me. I imagine it's cheap though.
The stacking ports are full duplex, so a single cable completes the ring.
An HDMI plug is probably stronger than an LC fibre plug, but as the HDMI loop stays within the rack (I doubt you could run a 10m HDMI cable and have a working stack), strength/latching capability is a bit irrelevant.
I would say, having bought stacking kits in the past for several hundred pounds, a £3 HDMI lead from CPC suits me fine!
As for the dodgy test topology: I don't see how you can test a 10Gbps link without 10Gbps infrastructure. To find the true speed you would surely need a couple of servers with ten aggregated 1Gbps connections each. I suspect, as above, that the test topology used showed results for a single port, as I fail to see how it could otherwise scope the backplane bandwidth.
How are you testing exactly? It sounds like it's your testing method at fault, not the switches or the cable. The fact it's rounding off to a neat 1Gbps would dismiss the claim that cable quality is at fault.
I guess you are testing with either several clients running iperf against a single server, or two separate client/server pairs running simultaneously (one host of each pair on each switch).
The former will always be limited to the connectivity of the server, so if it's not a bonded (LACP) connection, you'll only ever see a 1Gbps cumulative result.
The latter, if both tests were run simultaneously, should see a total of 2Gbps throughput across the two iperf tests. So if you're using this method but not getting that result, then it sounds like a config issue.
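If it's the latter and you want to rule out method, something like this is a reasonable sketch using classic iperf (the hostnames are placeholders of mine; it assumes hosts A and C hang off switch 1 and hosts B and D off switch 2):

    # On hostB and hostD (both on switch 2): start iperf servers
    iperf -s

    # On hostA (switch 1): push traffic across the stack link to hostB
    iperf -c hostB -t 30

    # On hostC (switch 1), started at the same time: push to hostD
    iperf -c hostD -t 30

    # If the stack trunk is wider than 1Gbps, the two client results
    # should sum to ~2Gbps; if each drops to ~500Mbps, everything is
    # being squeezed through a single 1Gbps path.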
Surprisingly, the Netgear forums for the ProSafe kit are pretty useful; they have a few mods on there who are higher-level Netgear techs, so it's definitely worth pursuing a solution there too.
Have you tried teaming ports between the two ends? You will definitely only get 1 gigabit if you're using a single port. If you have computers with multiple gigabit network cards, try setting them up in a team, then set up the switch with the ports in an LACP trunk, and then run iperf between them.
I see you said that the speed dropped when two iperf sessions ran at the same time. This teamed test will also eliminate the possibility of the cable being faulty.
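If you want to try the teamed test on Linux hosts, here's a minimal sketch with the kernel bonding driver and iproute2 (eth0/eth1, bond0 and the addresses are my assumptions, and the corresponding switch ports must be configured as an LACP trunk first):

    # Create an 802.3ad (LACP) bond and enslave two gigabit NICs
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down
    ip link set eth1 down
    ip link set eth0 master bond0
    ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0

    # On the far host (bonded the same way, on the other switch): iperf -s
    # Then from this host, use parallel streams so the LACP hash can
    # spread the flows across both links:
    iperf -c 192.168.1.20 -P 4

Bear in mind LACP hashes per flow, so a single TCP stream will still top out at 1Gbps; the -P flag is what lets you see more than one link's worth of throughput.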
An HDMI lead is just a neat version of an LVDS link; you could use CX4, U320 VHDCI or even DVI. First, you say 2 HDMI leads: I assume you have not used both! You stack a pair of switches with a single cable.
HDMI is probably tougher than LC fibre, so really, what's the issue? I have two GS748TS' stacked and can easily move 5Gbps across them (I have a 4Gbps LAG from a quad card in my file server feeding one switch, and the render farm on the other can pull over 380MB/s from that server).
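For context, that 380MB/s converts to:

    380 MB/s × 8 bits/byte ≈ 3.04 Gbit/s

which can only cross the stack if the trunk really is wider than a single gigabit link.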
I think you have a dodgy testing topology, IMHO.