I've been troubleshooting a severe WAN speed issue. I've since fixed it, but I'm documenting the details here for the benefit of others:
Via Wireshark, logging, and simplifying the config, I narrowed it down to some strange behaviour from a gateway doing DNAT to servers on the internal network. The gateway (a CentOS box) and the servers are both running in the same VMware ESXi 5 host (and this turns out to be significant).
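For context, the forwarding on the gateway is presumably the usual iptables DNAT pattern on CentOS. A minimal sketch of the kind of rules involved, with made-up interface names and addresses rather than my actual config:

    # Illustrative only: interface names and addresses are placeholders.
    # Redirect inbound HTTP arriving on the WAN interface (eth0) to an
    # internal web server (192.168.1.10) behind the gateway.
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 192.168.1.10:80

    # Allow the DNATted connection through the FORWARD chain.
    iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT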
Here is the sequence of events that happened - quite consistently - when I attempted to download a file from an HTTP server behind the DNAT, using a test client connected directly to the WAN side of the gateway (bypassing the actual Internet connection normally used here):
1. The usual TCP connection establishment (SYN, SYN-ACK, ACK) proceeds normally; the gateway remaps the server's IP correctly both ways.
2. The client sends a single TCP segment with the HTTP GET, and this is also DNATted correctly to the target server.
3. The server sends a 1460-byte TCP segment with the 200 response and part of the file, via the gateway. On the wire the frame is 1514 bytes: 1460 bytes of data plus the 20-byte TCP and 20-byte IP headers make a 1500-byte IP packet, plus the 14-byte Ethernet header. This segment should cross the gateway but doesn't.
4. The server sends a second 1460-byte TCP segment, continuing the file, via the gateway. Again, the IP packet is 1500 bytes. This segment doesn't cross the gateway either and is never accounted for.
5. The gateway sends an ICMP Type 3 Code 4 (Destination Unreachable: Fragmentation Needed) packet back to the server, citing the packet sent in event 3. The ICMP packet states that the next-hop MTU is 1500. This appears to be nonsensical: the network is 1500-byte clean, and the packets in events 3 and 4 were already within the stated 1500-byte limit. The server understandably ignores this response. (Originally, ICMP had been dropped by an overzealous firewall, but that was fixed.)
6. After a considerable delay (and, in some configurations, duplicate ACKs from the server), the server decides to resend the segment from event 3, this time on its own. Apart from the IP identification field and checksum, the frame is identical to the one in event 3: same length, and still with the Don't Fragment flag set. This time, however, the gateway happily passes the segment on to the client, in one piece, instead of sending an ICMP reject.
7. The client ACKs this, and the transfer continues, albeit excruciatingly slowly, since subsequent segments go through roughly the same pattern of being rejected, timing out, being resent and then getting through.
The client and server work together normally if the client is moved to the LAN so as to access the server directly.
This strange behaviour varies unpredictably based on seemingly irrelevant details of the target server.
For instance, on Server 2003 R2, the 7 MB test file would take over 7 hours to transfer if Windows Firewall was enabled (even if it allowed HTTP and all ICMP), whereas if Windows Firewall was disabled the issue would not appear at all, and, paradoxically, the gateway would never send the rejection in the first place. On Server 2008 R2, on the other hand, disabling Windows Firewall had no effect whatsoever, but the transfer, while still impaired, was much faster than on Server 2003 R2 with the firewall enabled. (I think this is because 2008 R2 uses smarter timeout heuristics and TCP fast retransmission.)
Even more strangely, the problem would disappear if Wireshark was installed on the target server. As such, to diagnose the issue I had to install Wireshark on a separate VM to watch the LAN-side network traffic (probably a better idea anyway, for other reasons).
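If the capture VM had been a Linux box instead, the equivalent with tcpdump would be something like the following; the interface name and server address are placeholders, not my actual setup:

    # Capture the HTTP transfer plus any ICMP on the LAN-side port group,
    # keeping full frames so they can be examined in Wireshark afterwards.
    tcpdump -i eth0 -s 0 -w lan-side.pcap 'host 192.168.1.10 and (tcp port 80 or icmp)'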
The ESXi host is version 5.0 U2.
You can't drop ICMP "fragmentation needed" messages. They're required for path MTU discovery, which in turn is required for TCP to work properly. Please LART the firewall administrator.
This is a configuration that has been recognized as fundamentally broken for more than a decade. ICMP is not optional.
I finally got to the bottom of this. It turned out to be an issue with VMware's implementation of TCP segmentation offloading in the virtual NIC of the target server.
The server's TCP/IP stack would hand one large block of data to the NIC, with the expectation that the NIC would break it into TCP segments restricted to the link's MTU. However, VMware left it as one large segment until - well, I'm not sure exactly when.
It seems it actually stayed one large segment when it reached the gateway VM's TCP/IP stack, which elicited the rejection.
An important clue was buried in the resulting ICMP packet: the quoted IP header of the rejected packet indicated a total length of 2960 bytes, far larger than the actual packet it appeared to be rejecting. That is exactly the size the IP packet would be if it carried the data of both segments sent so far as a single TCP segment: 2 × 1460 bytes of data + 20 bytes of TCP header + 20 bytes of IP header = 2960 bytes.
One thing that made the issue very hard to diagnose was that the transmitted data actually did appear split into MTU-sized frames to Wireshark running on another VM (connected to the same vSwitch on a separate, promiscuous port group). I'm really not sure why the gateway VM saw one packet while the Wireshark VM saw two. FWIW, the gateway doesn't have large receive offload enabled; I could understand this if it did. The Wireshark VM is running Windows 7.
I think VMware's logic in delaying the segmentation is so that if the data is to go out a physical NIC, the NIC's actual hardware offload can be leveraged. It does seem buggy, however, that it would fail to segment before sending into another VM, and inconsistently, for that matter. I've seen this behaviour mentioned elsewhere as a VMware bug.
The solution was simply to turn off TCP segmentation offload on the target server. The procedure varies by OS, but FWIW:
In Windows, open the connection's properties (General or Networking tab), click "Configure..." beside the adapter, and look on the Advanced tab. On Server 2003 R2 the setting is listed as "IPv4 TCP Segmentation Offload"; on Server 2008 R2 it's "Large Send Offload (IPv4)".
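On a Linux guest (not my case here, so treat this as an untested sketch), the equivalent would normally be done with ethtool, assuming the interface is eth0:

    # Show the current offload settings for the interface (name assumed to be eth0).
    ethtool -k eth0

    # Turn off TCP segmentation offload; "gso off" can be added as well if generic
    # segmentation offload is also handing oversized segments to the virtual NIC.
    ethtool -K eth0 tso off

Note that ethtool settings don't survive a reboot, so they would need to be applied from the distribution's network scripts to be permanent.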
This solution is a bit of a kludge and could conceivably impact performance in some environments, so I'll still accept any better answer.
I had the same symptoms and the problem turned out to be this kernel bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=754294
I have seen the same issue on Linux hosts.
The solution was to deactivate Large Receive Offload (LRO) on the network driver (vmxnet) of the gateway machine.
See the VMware KB article: http://kb.vmware.com/kb/2055140
Thus, packets arriving at the gateway machine were merged by the network driver before being handed to the network stack, which then dropped them for being bigger than the MTU.
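For what it's worth, a minimal sketch of how that is typically done on a Linux gateway, assuming the interface is eth0 and the driver exposes the setting through ethtool (see the KB article above for the full, driver-specific procedure):

    # Check whether large receive offload is currently enabled (interface name assumed).
    ethtool -k eth0 | grep large-receive-offload

    # Disable LRO so the driver stops coalescing received segments into
    # packets larger than the MTU before handing them to the network stack.
    ethtool -K eth0 lro off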