I measured the load speed of a static image file served by nginx from my server (using the Pingdom service). The server should normally process such a request within a few seconds. My server is located in Dallas, TX. When I used Pingdom's Dallas probe, it took 200ms to load the file, including DNS resolution and data transfer. Obviously a transfer over a longer distance should take longer; but surprisingly this value was 800ms from NYC and 1.5s from Amsterdam.
The apparent conclusion is that the distance between server and client is what drives the transfer time, and that this has nothing to do with my server's performance. However, when I checked the same difference for major websites such as Google and Bing, it was only about 50% (e.g. 200ms from the US and 300ms from Europe).
Is there something I can improve on my server to serve long-distance requests better?
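For reference, a minimal Python sketch that times a single request end to end (the URL is hypothetical, and it won't match Pingdom's breakdown exactly since their probes add their own overhead):

    import time
    import urllib.request

    URL = "http://example.com/static/test-image.png"  # hypothetical URL

    start = time.monotonic()
    with urllib.request.urlopen(URL) as resp:  # DNS + connect + transfer
        body = resp.read()
    elapsed_ms = (time.monotonic() - start) * 1000

    print(f"fetched {len(body)} bytes in {elapsed_ms:.0f} ms")

Running this from machines in different regions gives a rough equivalent of the per-location numbers above.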
Those large companies have geographically-dispersed data centres, so you're being responded to by a site that's closer to you rather than one central site - that's all it is.
+1 to Chopper3, but I'd also like to explain something.
Geographic distance has less to do with transfer speed than you would think.
What does matter is how your traffic is actually routed:
Example 1: At my university, we get our internet through a state-owned ISP. This ISP has its own network and peers in Chicago. Therefore, in order to connect to a house down the road, the connection needs to go all the way to Chicago. Thus, it's much faster to communicate with something in Chicago than with something down the road.
I guess you need to think of it like roads rather than as the crow flies. Just because something is geographically closer doesn't mean it's closer on the internet, and it doesn't mean it's faster. This effect is greatly exaggerated on the internet. For instance, if the single "freeway" between Dallas and NYC is saturated, it could be faster to transfer from Europe to Dallas.
You should try a CDN (Content Delivery Network) service. It helps reduce latency for users who are located far from the server. For more information, visit the MaxCDN website; you will find the details relevant to your problem there.
The keyword is latency. It depends on whether you need many small interactions to get to your content, or just one big bulk transfer.
The former will be affected by latency, the latter not that much.
Example:
If there are 1000 small transactions with a latency of 0.01 s each, you will need 10 s just for those transactions to finish. The constant factor on top of that is the payload divided by the available bandwidth. The "local" latency might be a factor of 10 smaller (0.001 s), giving 1 s for the whole batch - that feels just about fast enough for a user.
Now if you can deliver the same payload with just 2 transactions, the 0.02 s won't matter.
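As a quick sanity check, here is the same arithmetic spelled out (payload and bandwidth values are assumed purely for illustration):

    # All values below are assumed for illustration.
    payload = 1_000_000        # bytes
    bandwidth = 10_000_000     # bytes/s -> payload/bandwidth = 0.1 s, the constant part

    def total_time(round_trips, latency_s):
        # serial round trips plus the bandwidth-bound transfer time
        return round_trips * latency_s + payload / bandwidth

    print(total_time(1000, 0.01))   # 1000 small transactions at 10 ms -> ~10.1 s
    print(total_time(1000, 0.001))  # same, with 1 ms "local" latency  -> ~1.1 s
    print(total_time(2, 0.01))      # only 2 transactions at 10 ms     -> ~0.12 s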
So the answer is: reduce the number of TCP transactions needed to deliver your payload.
How big is your test file?
Don't forget that TCP slow start means latency plays a big role in slowing down initial connections.
Have a read of "The Myth of Broadband" in http://www.bookofspeed.com/chapter3.html
These days, depending on which version you're running, you can increase the initcwnd on both Linux and Windows Server to reduce some of the latency effects.
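To see why the initial congestion window matters, here is a toy Python model of slow start (assuming no packet loss, a 1460-byte MSS, and a window that doubles each round trip - real stacks differ):

    MSS = 1460  # bytes per TCP segment (typical value, assumed)

    def round_trips(payload_bytes, initcwnd_segments):
        # count round trips needed under idealised slow start
        cwnd, sent, rtts = initcwnd_segments, 0, 0
        while sent < payload_bytes:
            sent += cwnd * MSS
            cwnd *= 2
            rtts += 1
        return rtts

    for initcwnd in (3, 10):
        n = round_trips(100_000, initcwnd)  # a 100 KB image, size assumed
        print(f"initcwnd={initcwnd}: {n} RTTs -> ~{n * 90} ms at a 90 ms RTT")

With these assumptions a 100 KB file takes 5 round trips at initcwnd=3 but only 3 at initcwnd=10, and each round trip saved is a full RTT the distant user doesn't wait for.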
Be aware that one image load isn't representative of the whole site. The unavoidable transatlantic delay adds roughly an extra 90ms (YMMV) to each element loaded, as a fixed amount rather than as a percentage of each request. This means larger items will not be any "worse" affected than smaller items.
Other than the first request, some of your requests will happen in parallel (e.g. a browser might load 3 or 4 of the images on your site and the CSS simultaneously), so these delays will be felt less by the user.
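As a crude illustration of why parallelism hides most of that per-request delay (all numbers assumed, and slow start and bandwidth limits are ignored):

    import math

    # Assumed: 30 page elements, an extra 90 ms of latency per request,
    # and 6 parallel connections per host (a typical browser limit).
    assets, extra_ms, parallel = 30, 90, 6

    sequential_penalty = assets * extra_ms
    parallel_penalty = math.ceil(assets / parallel) * extra_ms

    print(f"added latency if fetched one by one: {sequential_penalty} ms")
    print(f"added latency with {parallel} parallel connections: {parallel_penalty} ms")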
Having said that, CDNs are a good way to reduce these delays, and very easy to use for static assets like images and CSS. They're also not expensive.