Say I had two servers which needed very low latency between them (database, file, etc.). Would it be possible to directly connect the two servers with 10GbE, so each server had one connection to the 'main' network (in the real world it would have two), plus one network card with an Ethernet cable connected directly to the second server? No switches or routers, just a direct connection:
             Internet/Datacenter
                      |
                      |
               ----------------
               |    Switch    |
        -------|              |-------
        |      ----------------      |
        |                            |
Network Card 1 (eth0)       Network Card 1 (eth0)
        |                            |
  --------------               --------------
  |  Server 1  |               |  Server 2  |
  --------------               --------------
        |                            |
Network Card 2 (eth1)       Network Card 2 (eth1)
        |                            |
        |        Direct 10GbE        |
        ------------------------------
My first question is, would this even be possible? Would they need any unusual/special services configured to let them talk over this link, other than a standard config file in /etc/sysconfig/network-scripts/? They would both have static IPs on eth1, but how would things like routing work? I'm not an expert on networking, so this is probably a n00b-ish question.
Second question: is there any point? Would there be any advantages over just letting them communicate over the standard network connection via the switch, or over giving them a second dedicated network just for inter-server traffic (since bandwidth on the standard network would be used by clients accessing the servers)? Assume latency is the priority.
I know there are some issues with this method, like when we came to add a third server we'd either have to give every server another network card and probably set up some very complicated replication triangle thingy, but since this is hypothetical let's ignore that.
And since latency is the key issue, would fiber be better than copper Ethernet? (Throughput isn't important so long as it can do a couple of Gb/sec.)
I phrased this question from a Linux POV because that's my background, but it could apply to any server/device.
There's no reason why you technically can't do this.
I'd probably do something similar under the circumstances, actually. From a purely Linux point of view it's really easy: just give the connection an IP address with a /30 netmask, giving you two usable IP addresses, and then it's a simple point-to-point link.
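A minimal sketch of what that could look like with RHEL-style ifcfg files (the interface names and the 192.168.100.0/30 addressing are just placeholder assumptions):

    # Server 1: /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.100.1
    NETMASK=255.255.255.252

    # Server 2: /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.100.2
    NETMASK=255.255.255.252

No extra routing should be needed: each host picks up a connected route for 192.168.100.0/30 on eth1 automatically, and the default route stays on eth0.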
If you wanted to grow the network, you could get a 10GE switch and then have a separate VLAN for traffic between servers. There's some very shiny gear in the Force10 range of switches that can do line-rate 10GE switching, with enormous buffers.
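If you went the VLAN route, the Linux side could be a tagged sub-interface, roughly like this (the VLAN ID and addressing are assumptions, and the switch ports would need matching 802.1Q config):

    # /etc/sysconfig/network-scripts/ifcfg-eth1.100
    DEVICE=eth1.100
    VLAN=yes
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.100.1
    NETMASK=255.255.255.0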
I can't comment from a Linux point of view, but I'll just use my knowledge and ask some more questions.
Are you really so dependent on low latency that you need to keep these servers in sync? Are they both running a database or something? 10GbE should satisfy most needs for keeping two servers in sync. I would sooner spend the money on a decent switch between the two instead of going the route you are looking at.
With a decent switch you could tag these ports to prioritise the traffic and even apply QoS to the traffic that needs to be real-time.
My thoughts.
I have actually done this between two laptops. Most modern LAN adapters auto-negotiate between them (including auto MDI-X), so you can use a regular LAN cable rather than a crossover cable.
Set static IP addresses that are not in the same range as any other subnet you are using. For example, if my systems are on a 192.168.x.x subnet, I use a 10.0.0.x subnet between them. Otherwise, it should just work.
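If you just want it up quickly without touching config files, a rough equivalent with iproute2 (addresses are examples only, and this won't persist across reboots):

    # On the first machine
    ip addr add 10.0.0.1/24 dev eth1
    ip link set eth1 up

    # On the second machine
    ip addr add 10.0.0.2/24 dev eth1
    ip link set eth1 up

    # Check it works
    ping 10.0.0.2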
Security vs. Performance vs. money.
If the back-channel traffic is high and money is low, use a direct connection. It's done every day, and in many real-world situations it can perform better than an already overloaded switch.
If the back-channel traffic is low and security is medium or low, bond NICs to increase overall Internet throughput (a rough config sketch is at the end of this answer): two connections from each server to the Internet, with multi-homed NICs to "isolate" replication traffic (separate IP spaces make it easier to firewall, audit, do packet-trace diagnostics, etc.).
If security is high and there's plenty of money, use a switch. Easier to expand. Easier to diagnose problems.
In the given scenario a switch purchase would not be warranted. Utilizing an existing switch with VLAN segmentation would possibly make sense. Although I can't see any reason to plug into the switch unless the servers are co-lo'd, i.e., not physically accessible. It's a waste of two switch ports unless packet capture/debugging is active.
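For reference, the NIC bonding mentioned above might look roughly like this with RHEL-style ifcfg files (device names, addresses and the bonding mode are assumptions; 802.3ad in particular needs matching LACP config on the switch):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and similarly for the other slave NIC)
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes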
There is no advantage to such a setup. Switches today are lightning fast, so you will never see any visible latency caused by the switch, and scalability would be a big issue for you as well. Also, there would be the problem of setting up routing, as you would have to maintain two separate networks instead of just one.