TL;DR version: Turns out this was a deep Broadcom networking bug in Windows Server 2008 R2. Replacing with Intel hardware fixed it. We don't use Broadcom hardware any more. Ever.
We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two Linux instances to provide failover. Each server has its own public IP, plus a single IP that is shared between the two using a virtual interface (eth1:1) at 69.59.196.211.
The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic.
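For reference, heartbeat brings the shared address up as an alias on whichever node is active, and forwarding is enabled on both gateways. Conceptually it amounts to something like the following (the netmask and exact commands are a sketch, not our literal config):

ifconfig eth1:1 69.59.196.211 netmask 255.255.255.224 up
sysctl -w net.ipv4.ip_forward=1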
We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting into the failed server and attempting to ping the gateway:
Pinging 69.59.196.211 with 32 bytes of data:
Reply from 69.59.196.220: Destination host unreachable.
Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211):
Interface: 69.59.196.220 --- 0xa
  Internet Address      Physical Address      Type
  69.59.196.161         00-26-88-63-c7-80     dynamic
  69.59.196.210         00-15-5d-0a-3e-0e     dynamic
  69.59.196.212         00-21-5e-4d-45-c9     dynamic
  69.59.196.213         00-15-5d-00-b2-0d     dynamic
  69.59.196.215         00-21-5e-4d-61-1a     dynamic
  69.59.196.217         00-21-5e-4d-2c-e8     dynamic
  69.59.196.219         00-21-5e-4d-38-e5     dynamic
  69.59.196.221         00-15-5d-00-b2-0d     dynamic
  69.59.196.222         00-15-5d-0a-3e-09     dynamic
  69.59.196.223         ff-ff-ff-ff-ff-ff     static
  224.0.0.22            01-00-5e-00-00-16     static
  224.0.0.252           01-00-5e-00-00-fc     static
  225.0.0.1             01-00-5e-00-00-01     static
On our Linux gateway instances, arp -a shows:
peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1
stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1
peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?
THINGS WE HAVE TRIED
I added a static ARP entry for testing on one of the Linux gateways, which still didn't help.
root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d
root@haproxy2:~# arp -a
peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1
stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1
root@haproxy2:~# ping 69.59.196.220
PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data.
--- 69.59.196.220 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6006ms
Rebooting the Windows web server solves this issue temporarily, with no other changes to the network, but our experience shows the issue will come back.
Swapping network cards and switches
I noticed the link light on the switch port for the failed Windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port I tried; I also swapped the cable, with the same result. When I tried changing the properties of the network card in Windows, the server locked up and required a hard reset after clicking apply. This Windows server has two physical network interfaces, so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again, we will know that it is not an issue with the network card.
(We also tried another switch we have on hand, no change)
Changing network hardware driver versions
We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2.
Replacing network cables
As a last ditch effort we remembered another change that occurred: the replacement of all of the patch cords between our servers and switch. We had purchased two sets, one green, of lengths 1ft - 3ft, for the private interfaces, and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred.
Disable checksum offload, remove TProxy
We also tried disabling TCP/IP checksum offload in the driver; no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.
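For the curious, the x-forwarded-for arrangement is plain HAProxy configuration rather than anything TProxy-specific. A minimal sketch, not our exact config:

defaults
    mode http
    option forwardfor

That just adds an X-Forwarded-For header carrying the client IP instead of rewriting the source address of the connection.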
Switch Virtualization providers
On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMware Server. No change.
Switch host model
We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model:
- http://en.wikipedia.org/wiki/Host_model
- http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx
We did that, and we also got some unpublished kernel hotfixes which were presumably rolled into 2008 R2 SP1. No fix.
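For reference, the host model is set per-interface with netsh -- something along these lines, with the interface name as a placeholder and enabled/disabled depending on which model you are moving to:

netsh interface ipv4 set interface "Local Area Connection" weakhostreceive=enabled weakhostsend=enabled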
Replacing network card hardware
Ultimately, replacing the Broadcom network hardware with Intel network hardware fixed this issue for us. So I am inclined to think that the Broadcom Windows Server 2008 R2 drivers are at fault!
It looks like your Windows server is not responding (or responding too slowly) to ARP requests from your gateway box. Does that <incomplete> eventually switch to <failed>? What network hardware do you have between the server and the gateway? Is it possible broadcast ARP requests are being filtered or blocked somewhere between the two hosts?

It means that you pinged the address, the IP has a PTR record (hence the name), but nothing responded from the machine in question. When we see this it's most commonly due to a subnet mask being set incorrectly - or to IPs intended for a loopback interface that were accidentally bound to the eth interface instead.
What is 196.220? What is its relationship with 196.211? I'm assuming that .220 is one of the HAProxy hosts. When you run ifconfig -a and arp -a on it, what does it show?
As Max Clark says, the <incomplete> just means that 69.59.196.211 has put out an ARP request for 69.59.196.220 and hasn't received a response yet. (In Windows-land you'll see this as an ARP mapping to "00-00-00-00-00-00"... It seems odd to me, BTW, that you're not seeing such an ARP mapping on 69.59.196.220 for 69.59.196.211.)
I tend not to like to use static ARP entries because, in my experience, ARP has generally done its job all the time.
If it were me, I'd sniff the appropriate Ethernet interface on the "failing" Windows machine (69.59.196.220) to observe it ARP'ing for 69.59.196.211, and to observe how / if it's responding to ARP requests from 69.59.196.211. I'd also consider sniffing on the gateway machine for ARP only (tcpdump -i interface-name arp) to see what the ARP traffic looks like from the side of the Linux machine.

I know, from the blog, that you've got a back-end network and a front-end network. During these outages, does the "failing" Windows server (69.59.196.220) have any problems communicating with other machines in the front-end network, or is it just having problems talking to its gateway? I'm curious whether you're coming at the failing machine through the front-end or back-end network when you're catching it in the act.
What are you doing to "resolve" the issue when it occurs?
Edit:
I see from your update that you're rebooting the "failing" Windows machine to resolve the issue. Before you do that next time, can you verify that the Windows machine is able to "talk" on its front-end interface at all? Also, grab a copy of the routing table from the Windows machine (route print) during a failure, too. (I'm trying to ascertain if the NIC / driver is going bonkers on the Windows machine, basically.)
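A quick checklist of things to capture during the next failure might look something like this (run from an elevated prompt on the failing server; the neighbor output on 2008 R2 also shows the ARP/neighbor state, which is more telling than arp -a alone):

route print
arp -a
ipconfig /all
netsh interface ipv4 show neighbors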
From http://linux-ip.net/html/ether-arp.html: this document shows the different ARP cache states (table 2.1). Incomplete would mean that it has sent a first ARP request (presumably after going through the stale, delay, and probe states) but hasn't yet received a response.
The reason the static ARP on the haproxy node doesn't help is that your web server still can't figure out how to get back to the gateway.
Static ARP on the web server breaks the ability for your web servers to switch gateways when one of the haproxy nodes fails -- I'm guessing the virtual interface shares the same MAC address as the haproxy node's eth1, so you'd have to hard-code one of the two gateways into each web server.
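If you did want to pin it on the Windows side just for testing, it would be something like the following (the MAC here is a placeholder for whichever haproxy node's eth1 currently owns .211), with the failover caveat above:

arp -s 69.59.196.211 00-11-22-33-44-55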
Do you have any kind of security software installed on the failing web server? I spent a long night with a Windows 2008 server that had Symantec Endpoint Security on it -- it installs some filtering code in the networking stack that prevented it from seeing the gateway's ARP packets at all. The fix for that (as provided by Microsoft) was to remove the registry entry that loaded the DLL.
The other time this problem occurred, removing the whole network adapter from device manager and reinstalling seemed to help.
Since you've statically set your arp entry, your servers know where to find the gateway. However, if your switch doesn't know where the gateway is, it won't forward your packets.
Sounds like you've got a bad (or confused) switch between your HAProxy servers and your web servers. Reboot it.
Either that, or your HAProxy servers disagree about which one is in control and are both answering ARP lookups for .211.
Along the same lines, if your switch is overloaded, your HAProxy servers might be unable to communicate with each other fast enough and are failing over.
The next time this problem occurs, I would suggest running some packet captures on the two hosts in question, to determine what ARP traffic each of them is observing.
Your HAproxy machine will most likely have some flavour of tcpdump installed. For the Windows machine you will need either a WinPcap-based application, such as Wireshark, or Microsoft Network Monitor.
In fact, thinking about it, as the problem appears to be with ARP specifically, you could potentially just continuously record all ARP traffic on the HAproxy machine and the Windows machine in question, with a rolling capture file of (for argument's sake) 10MB. That should be large enough such that by the time you've detected a failure, the capture file will still contain the ARP traffic from before the failure. (It's worth experimenting by running the capture for an hour or so, to see how much data it generates).
Example capture syntax for Linux tcpdump (note, I don't have a Linux box handy to test this on; please test the behaviour of -C and -W before using in production!):
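Something along these lines should do it (interface name and file path are placeholders; -C rotates the file at roughly 10MB and -W keeps two files):

tcpdump -i eth1 -w /var/tmp/arp-capture.pcap -C 10 -W 2 arp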
This should hopefully give you some indication of what precisely is failing. When an ARP entry expires (and according to this article, newer versions of Windows appear to age out 'inactive' entries very aggressively), I would expect the following to happen: the Windows server broadcasts an ARP request asking who has 69.59.196.211, the active HAProxy node replies from eth1:1 with its MAC address, and the Windows server caches that mapping and carries on sending traffic to the gateway.
Simple as it sounds, there are a bunch of other things that may interfere with this process: security software inserting itself into the network stack and filtering ARP, an incorrect subnet mask, a flaky NIC or driver, or a switch that is dropping or misdirecting the broadcast requests.

Things to check if/when this happens again: whether the Windows machine is actually sending ARP requests for the gateway, whether the replies from the HAProxy node make it back, and whether the back-end interface on the same server is still behaving normally.
We had a similar issue with one of our 2008 R2 terminal servers, where all traffic on the NIC would stop but the link would stay up and the NIC LEDs would show activity. This was an ongoing issue that kept cropping up 2-3 times a week, but only after around 12-13 hours of uptime (the server is rebooted nightly).
I found Seriousbit Netbalancer was the cause, after I tried (out of curiosity) terminating the NetbalancerService service. Traffic then started moving across the interface. I've since uninstalled Netbalancer.
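If anyone wants to test the same thing before uninstalling, stopping the service from an elevated prompt (using the service name above) was enough in my case to see whether traffic starts flowing again:

sc stop NetbalancerService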
I had the same problem with the onboard LAN on an Asus mainboard. It was fixed by installing the latest driver from the Realtek website.