Can't find anything about this ;)
We're building a hyperconverged cluster as a development system; for us, this is state of the art.
We got 2 machines for a start, with a switch ordered that should arrive in about 4 weeks. Both servers have 2× 100 Gb Ethernet cards from Mellanox (ConnectX-4).
Plugging a cable directly between them, I get link down. I can see the link trying to come up, but it fails and the connection drops immediately.
Is a direct connection between two Ethernet cards not supported with QSFP28? Is there anything else I should be aware of? Worst case, I'll shelve the 100G link until the switch arrives.
There's no old-style crossover patch cord or anything like that. A properly working passive copper QSFP link (you're not using optics, are you?) should work regardless of switch vs. switchless use. Are you using one of these cables, or something third-party?
http://www.mellanox.com/products/interconnect/ethernet-direct-attach-copper-cables.php
Reference Mellanox thread (unanswered so far).
https://community.mellanox.com/thread/4125
QSFP28 ports are all alike (just like SFP, SFP+, etc.); there's no MDI vs. MDI-X pinout as with twisted pair. Therefore, straight-through cables aren't used at all: DACs have an integrated crossover, so there's nothing to worry about on that front.
Possibly your DAC isn't compatible with the NICs; some are very picky about the branding. It's also possible that the NICs aren't configured alike. You should check the driver messages to see why the link doesn't come up.
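On Linux, a minimal sketch of where to look, assuming the inbox mlx5 driver (ConnectX-4) and a hypothetical interface name `enp1s0f0` (check `ip link` for yours); the `|| true` guards are just so the snippet doesn't abort if a command needs root or the interface differs:

```shell
# Hypothetical interface name; find yours with `ip link`
IFACE=enp1s0f0

# Kernel messages from the ConnectX-4 driver (mlx5_core) often state
# why a link fails, e.g. an unsupported or unrecognized cable/module
dmesg | grep -i mlx5 | tail -n 20 || true

# Link state, speed and advertised modes as the NIC sees them
ethtool "$IFACE" || true

# Cable/module EEPROM: vendor, part number, DAC vs. optics
# (may require root)
ethtool -m "$IFACE" || true
```

If `ethtool -m` refuses to read the EEPROM or shows an unexpected vendor string, that points toward the cable-compatibility theory above.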
Yes, they are all crossover. There are no straight-through cables in the QSFP28 world.
What may be unusual for RJ45 users is that all cages (ports) have the same pinout, regardless of whether they're in a switch, router, host, PC, or whatever. So the whole thing works perfectly with a single cable type for connecting any kind of device to any other.