At the moment we're trying to decide whether to move our datacenter from the west coast to the east coast.
However, I am seeing some disturbing latency numbers from my west coast location to the east coast. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:
- West coast to east coast: 215 ms latency, 46 ms transfer time, 261 ms total
- West coast to west coast: 114 ms latency, 41 ms transfer time, 155 ms total
It makes sense that Corvallis, OR is geographically closer to my location in Berkeley, CA, so I expect that connection to be a bit faster... but I'm seeing roughly +100 ms of latency when I perform the same test against the NYC server. That seems... excessive to me, particularly since the time spent transferring the actual data only increased 10%, yet the latency increased 100%!
That feels... wrong... to me.
I found a few links here that were helpful (through Google no less!) ...
- Does routing distance affect performance significantly?
- How does geography affect network latency?
- Latency in Internet connections from Europe to USA
... but nothing authoritative.
So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
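(For reference, here is roughly how you could reproduce the measurement above outside the dev tools. This is just a sketch, and the URL is a placeholder for the actual logo file; it splits time-to-first-byte, which roughly corresponds to the "latency" column, from the body transfer time.)

```
import time
import http.client
from urllib.parse import urlparse

def time_request(url):
    """Split a GET into time-to-first-byte ("latency") and body transfer time."""
    parts = urlparse(url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    start = time.monotonic()
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()
    resp.read(1)                        # block until the first byte of the body arrives
    ttfb = time.monotonic() - start     # roughly the dev tools "latency"
    resp.read()                         # pull down the rest of the file
    total = time.monotonic() - start
    conn.close()
    return ttfb * 1000, (total - ttfb) * 1000, total * 1000

# Placeholder URL -- point this at the same .png on each coast's server.
latency, transfer, total = time_request("http://example.com/logo.png")
print(f"{latency:.0f} ms latency, {transfer:.0f} ms transfer time, {total:.0f} ms total")
```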
Speed of Light:
You are not going to beat the speed of light, as an interesting academic point. This link works out Stanford to Boston at ~40 ms as the best possible time. When this person did the calculation, he decided the internet operates at "within a factor of two of the speed of light", so you should expect about ~85 ms of transfer time.
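If you want to redo that arithmetic yourself, here is a minimal sketch; the distance and the two-thirds-of-c fibre factor are rough assumptions, not measured values:

```
# Back-of-the-envelope check of the ~40 ms figure.
C_KM_PER_S = 299_792          # speed of light in vacuum, km/s
FIBRE_FACTOR = 2 / 3          # light in glass travels at roughly 2/3 c
DISTANCE_KM = 4_300           # approximate Stanford <-> Boston great-circle distance

one_way_ms = DISTANCE_KM / (C_KM_PER_S * FIBRE_FACTOR) * 1000
print(f"best-case round trip: ~{2 * one_way_ms:.0f} ms")                 # ~43 ms
print(f"'factor of two' real-world estimate: ~{4 * one_way_ms:.0f} ms")  # ~86 ms
```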
TCP Window Size:
If you are having transfer speed issues, you may need to increase the TCP receive window size. You might also need to enable window scaling if this is a high-bandwidth connection with high latency (called a "Long Fat Pipe"). So if you are transferring a large file, you need a big enough receive window to fill the pipe without having to wait for window updates. I went into some detail on how to calculate that in my answer Tuning an Elephant.
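As a rough illustration of that calculation (the bandwidth and RTT figures below are made-up examples, not measurements): the receive window has to cover the bandwidth-delay product, i.e. bandwidth times RTT.

```
# Bandwidth-delay product: the receive window must be at least this big
# to keep a "long fat pipe" full. Figures below are illustrative only.
bandwidth_bps = 100 * 10**6   # assume a 100 Mbit/s path
rtt_s = 0.085                 # assume ~85 ms coast-to-coast RTT

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"need a receive window of at least {bdp_bytes / 1024:.0f} KiB")
# ~1038 KiB here -- far beyond the classic 64 KiB limit, which is why
# window scaling (RFC 1323) has to be enabled on a link like this.
```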
Geography and Latency:
A failing point of some CDNs (Content Distribution Networks) is that they equate latency and geography. Google did a lot of research with their network and found flaws in this; they published the results in the white paper Moving Beyond End-to-End Path Information to Optimize CDN Performance.
BGP Peerings:
Also, if you start to study BGP (the core internet routing protocol) and how ISPs choose peerings, you will find it is often more about finances and politics, so you might not always get the 'best' route to certain geographic locations, depending on your ISP. You can look at how your IP is connected to other ISPs (Autonomous Systems) using a looking glass router. You can also use a special whois service.
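As a sketch of how you might script such a lookup (assuming Team Cymru's IP-to-ASN whois service at whois.cymru.com is still available; any RFC 3912 whois server speaks the same plain protocol):

```
import socket

def whois_lookup(query, server="whois.cymru.com", port=43):
    """Send a plain RFC 3912 whois query and return the raw text response."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# Ask which Autonomous System announces a given IP (8.8.8.8 is just an example).
print(whois_lookup("8.8.8.8"))
```

The output format depends entirely on whichever whois service you point it at, so treat the response as raw text.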
It's also fun to explore these peerings with a GUI tool like linkrank, which gives you a picture of the internet around you.
This site would suggest around 70-80ms latency between East/West coast US is typical (San Francisco to New York for example).
Here are my timings (I'm in London, England, so my West coast times are higher than East). I get a 74ms latency difference, which seems to support the value from that site.
These were measured using Google Chrome dev tools.
Measure with ICMP first if at all possible. ICMP tests typically use a very small payload by default, do not use a three-way handshake, and do not have to interact with another application up the stack like HTTP does. Whatever the case, it is of the utmost importance that HTTP results do not get mixed up with ICMP results. They are apples and oranges.
Going by the answer of Rich Adams and using the site that he recommended, you can see that on AT&T's backbone, it takes 72 ms for ICMP traffic to move between their SF and NY endpoints. That is a fair number to go by, but you must keep in mind that this is on a network that is completely controlled by AT&T. It does not take into account the transition to your home or office network.
If you do a ping against careers.stackoverflow.com from your source network, you should see something not too far off of 72 ms (maybe +/- 20 ms). If that is the case, then you can probably assume that the network path between the two of you is okay and running within normal ranges. If not, don't panic and measure from a few other places. It could be your ISP.
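If ICMP is blocked from where you are, or you want to script the check, timing a bare TCP connect gives a number in the same ballpark (one round trip for the SYN/SYN-ACK) without needing raw-socket privileges. This is only a rough stand-in for a real ping:

```
import socket
import statistics
import time

def tcp_rtt(host, port=80, samples=5):
    """Approximate RTT by timing TCP connects (one SYN/SYN-ACK round trip each)."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.monotonic() - start) * 1000)
    return times

rtts = tcp_rtt("careers.stackoverflow.com")
print(f"min {min(rtts):.0f} ms / median {statistics.median(rtts):.0f} ms / max {max(rtts):.0f} ms")
# Anything wildly above ~72 ms (+/- 20 ms or so) from the West coast would
# suggest the problem is on the path rather than in the application.
```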
Assuming that passed, your next step is to tackle the application layer and determine if there is anything wrong with the additional overhead you are seeing with your HTTP requests. This can vary from app to app due to hardware, OS, and application stack, but since you have roughly identical equipment on both the East and West coasts, you could have East coast users hit the West coast servers and West coast users hit the East coast. If both sites are configured properly, I would expect all the numbers to be more or less equal, and therefore to demonstrate that what you are seeing is pretty much par for the course.
If those HTTP times have a wide variance, I would not be surprised if there was a configuration issue on the slower-performing site.
Now, once you are at this point, you can attempt some more aggressive optimization on the app side to see if those numbers can be reduced at all. For example, if you are using IIS 7, are you taking advantage of its caching capabilities, etc.? Maybe you could win something there, maybe not. When it comes to tweaking low-level items such as TCP windows, I am very skeptical that it would have much of an impact for something like Stack Overflow. But hey - you won't know until you try it and measure.
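If you do look at caching, one cheap first check is whether the static assets are even being served with cache headers. A quick sketch (the URL is a placeholder for the logo file from the original test):

```
import urllib.request

# Check whether a static asset is served with caching headers.
# The URL is a placeholder -- point it at the logo .png from the original test.
req = urllib.request.Request("http://example.com/logo.png", method="HEAD")
with urllib.request.urlopen(req, timeout=10) as resp:
    for header in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Content-Encoding"):
        print(f"{header}: {resp.headers.get(header)}")
```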
Several of the answers here are using ping and traceroute for their explanations. These tools have their place, but they are not reliable for network performance measurement.
In particular, (at least some) Juniper routers send processing of ICMP events to the control plane of the router. This is MUCH slower than the forwarding plane, especially in a backbone router.
There are other circumstances where the ICMP response can be much slower than a router's actual forwarding performance. For instance, imagine an all-software router (no specialized forwarding hardware) that is at 99% of CPU capacity, but it is still moving traffic fine. Do you want it to spend a lot of cycles processing traceroute responses, or forwarding traffic? So processing the response is a super low priority.
As a result, ping/traceroute give you reasonable upper bounds - things are going at least that fast - but they don't really tell you how fast real traffic is going.
In any event -
Here's an example traceroute from the University of Michigan (central US) to Stanford (west coast US). (It happens to go by way of Washington, DC (east coast US), which is 500 miles in the "wrong" direction.)
In particular, note the time difference between the traceroute results from the wash router and the atla router (hops 7 & 8). The network path goes first to wash and then to atla; wash takes 50-100 ms to respond, while atla takes about 28 ms. Clearly atla is further away, but its traceroute results suggest that it's closer.
See http://www.internet2.edu/performance/ for lots of info on network measurement. (Disclaimer: I used to work for Internet2.) Also see: https://fasterdata.es.net/
To add some specific relevance to the original question: as you can see, I had an 83 ms round-trip ping time to Stanford, so we know the network can go at least that fast.
Note that the research & education network path that I took on this traceroute is likely to be faster than a commodity internet path. R&E networks generally overprovision their connections, which makes buffering in each router unlikely. Also, note the long physical path, longer than coast-to-coast, although clearly representative of real traffic.
michigan->washington, dc->atlanta->houston->los angeles->stanford
I'm seeing consistent differences, and I'm sitting in Norway:
This was measured with the scientifically accurate and proven method of using the Resources view of Google Chrome and just repeatedly refreshing each link.
Traceroute to serverfault
Traceroute to careers
Unfortunately, it then starts going into a loop or some such and keeps returning stars and timeouts until it hits 30 hops, then finishes.
Note: the traceroutes are from a different host than the timings at the start; I had to RDP to my hosted server to run them.
Everyone here has some really good points, and they are all correct from their own point of view.
It all comes down to this: there is no real, exact answer, because there are so many variables that any answer can always be proven wrong just by changing one of a hundred of them.
Take the 72 ms NY-to-SF figure: that is the latency from PoP to PoP of a single carrier. It does not take into account any of the other great points made here about congestion, packet loss, quality of service, out-of-order packets, packet size, or network rerouting, all of which live outside the perfect world of PoP to PoP.
And then when you add in the last mile (generally many miles) from the PoP to your actual location within each of the two cities, where all of these variables become much more fluid, things start to escalate exponentially beyond reasonable guessability!
As an example, I ran a test between NY City and SF over the course of a business day. I did this on a day when there were no major "incidents" occurring around the world that would cause a spike in traffic, so maybe it was not average by today's standards! But nonetheless it was my test. I measured from one business location to another over this period, during normal business hours on each coast.
At the same time, I monitored the circuit provider's numbers on the web.
The results were latency numbers between 88 and 100 ms, door to door, between the business locations. This did not include any intra-office network latency.
The service provider's network latency ranged between 70 and 80 ms, meaning the last-mile latency could have ranged between roughly 18 and 30 ms. I did not correlate the exact peaks and lows between the two environments.
I see approx 80-90ms latency on well run, well measured links between East and West coasts.
It would be interesting to see where you're gaining latency - try a tool like layer-four traceroute (lft). Chances are a lot of it is gained on the "last mile" (i.e. in your local broadband provider).
That the transfer time was only slightly impacted is to be expected - packet loss and jitter are more useful measurements to look at when investigating transfer time differences between two locations.
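To put rough numbers on loss and jitter, you can lean on the system ping and just pull out its summary lines. This sketch assumes Unix-style flags (Windows uses -n instead of -c):

```
import subprocess

def ping_summary(host, count=20):
    """Run the system ping and return its loss / rtt summary lines (Unix-style flags)."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=False,
    ).stdout
    return [line for line in out.splitlines() if "loss" in line or "min/avg/max" in line]

for line in ping_summary("serverfault.com"):
    print(line)
# On Linux the rtt line ends in "mdev", a rough jitter figure; consistent loss
# or a large spread between min and max explains slow transfers far better
# than the raw latency difference itself.
```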
Just for fun, when I played the online game Lineage 2 NA release from within Europe:
The difference seems to support that up to 100ms is within reason, considering the unpredictable nature of the internet.
Using the acclaimed Chrome refresh test, I get document load times that differ by roughly 130 ms.
NYC Timings:
Using Chrome, on a residential connection.
Using lft from a VPS in a datacenter in Newark, New Jersey: