I know Linux can use RDMA NICs like Solarflare... I just found that Intel has something similar, the NetEffect cards. But Intel talks about them entirely in terms of clusters.
Can someone please explain? If I want low-latency networking and install an RDMA NIC in my server, is there a limitation on where the cable can go? Is a specific device expected on the other end? Is it a special RDMA switch, or an RDMA adapter in front of a regular switch, or what? And why all the cluster talk? What if I just want a single server running Windows (I can install Windows HPC Server or Windows Server 2008 R2)?
RDMA over Ethernet actually shares some of the same requirements as Fibre Channel over Ethernet (FCoE) as far as lossless delivery and bandwidth guarantees are concerned. Many of the switches that support elements of the various Data Center Bridging (DCB) standards are also appropriate for RDMA. At a minimum you'll likely need Priority Flow Control (PFC), but depending on the stack in use you may also require jumbo frame support. The other mechanisms in DCB (bandwidth reservation, congestion notification, etc.) are hugely helpful as well if supported. These features tend to be found in higher-end 10GbE switches - specifically the ones sold with enterprise storage features.
There's nothing in theory to stop you from running RDMA over Ethernet through a generic switch, of course, but any kind of glitch (i.e. exactly the sort of thing the gear above is meant to prevent) is going to get pretty bad pretty quickly.
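As for what the host itself sees: the switch requirements above are all fabric-side, while the server just talks to the NIC through the verbs API. Here's a minimal sketch (Linux/libibverbs rather than Windows, purely illustrative - it assumes a verbs-capable NIC and the rdma-core headers, and the file name is made up) that enumerates RDMA devices and reports each port's link layer and active MTU, which is also a quick way to check whether jumbo frames were actually negotiated:

    /*
     * rdma_ports.c - hypothetical example: list RDMA devices and their ports.
     * Build (assumed): gcc rdma_ports.c -libverbs -o rdma_ports
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "No RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dev_attr;
            if (ibv_query_device(ctx, &dev_attr) == 0) {
                for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                    struct ibv_port_attr pattr;
                    if (ibv_query_port(ctx, port, &pattr))
                        continue;

                    /* RoCE NICs report Ethernet here; native IB HCAs report InfiniBand */
                    const char *ll = (pattr.link_layer == IBV_LINK_LAYER_ETHERNET)
                                         ? "Ethernet" : "InfiniBand";

                    /* active_mtu is an enum: 128 << value gives the size in bytes */
                    printf("%s port %u: link layer %s, state %s, active MTU %d bytes\n",
                           ibv_get_device_name(devs[i]), (unsigned)port, ll,
                           ibv_port_state_str(pattr.state), 128 << pattr.active_mtu);
                }
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }

The point is that nothing in this code cares what's on the other end of the cable - the lossless behaviour has to come from the switch and its DCB/PFC configuration, not from anything the application does.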