It sounds obvious that a faster connection lowers latency... but I wonder: I am working remotely on a host on the other side of the world. Light can only travel so fast (about 1 foot per nanosecond), and we both have broadband connections in excess of 1,000 kbps upload and 10,000 kbps download.
Will a higher-bandwidth connection lower the time it takes to ping? Since a ping involves very little data, how would a faster connection help? Currently a ping takes 450 ms. Is there any way I can improve it?
First, bandwidth is not the same as latency. A faster connection won't necessarily reduce your latency. 450 ms does seem a little slow, but not that far off if you are going halfway across the world. As a frame of reference, a high-speed, low-latency link takes ~70-80 ms to cross the US. You might be able to eke out a bit less latency by changing your provider, assuming they have a more optimal peering path, but I can't promise anything.
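You can sanity-check that 450 ms figure against physics. This is a rough sketch, not a measurement: the 20,000 km path length and the ~200,000 km/s signal speed in fibre (about 2/3 of c, slower than the 1-foot-per-nanosecond vacuum figure) are assumptions for illustration.

```python
# Rough lower bound on RTT imposed by propagation alone.
# Assumed numbers: ~20,000 km one-way path (halfway around the Earth),
# signal speed in optical fibre ~200,000 km/s (about 2/3 of c).

DISTANCE_KM = 20_000            # assumed one-way fibre path length
FIBRE_SPEED_KM_PER_S = 200_000  # approximate speed of light in glass

one_way_ms = DISTANCE_KM / FIBRE_SPEED_KM_PER_S * 1000
rtt_ms = 2 * one_way_ms

print(f"one-way propagation: {one_way_ms:.0f} ms")  # 100 ms
print(f"minimum RTT:         {rtt_ms:.0f} ms")      # 200 ms
```

So roughly 200 ms of the 450 ms RTT is unavoidable propagation delay; the rest is routing, queuing, and processing, which is where a better peering path could help.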
A "faster" connection (as you're referring to it) doesn't lower latency. A "faster" connection allows more data to be placed on the wire in a given period of time.
Bandwidth is a measure of capacity.
Latency is a measure of delay.
EDIT
Here's an example of the difference between bandwidth and latency. Imagine two internet connections, one 10 Mbps and the other 1 Mbps, both with a latency of 50 ms. Now imagine that I'm sending keystrokes to a remote terminal on the other end of those connections. For the sake of simplicity, let's say that each keystroke consumes 1 Mbps of bandwidth.

On the 10 Mbps connection I'm able to send the letters A, B, C, D, E, F, G, H, I, J at the same time, so they all arrive at the remote terminal 50 ms later and are echoed on the screen... at the same time. On the 1 Mbps connection each keystroke is sent independently, because each keystroke consumes all of the available bandwidth. So the letter A is sent, and 50 ms later it's received by the remote terminal and echoed on the screen, followed by the letter B 50 ms after that, then the letter C... all the way to the letter J. It takes 500 ms for all ten letters to be received and echoed on the remote terminal.

Is the 10 Mbps connection faster? No, it isn't. Its latency is 50 ms, just like the 1 Mbps connection. It appears faster because it has higher throughput (bandwidth) and more data can be placed on the wire at one time. That's the difference between bandwidth (capacity) and latency (delay). In the strict sense, a "faster" connection (in the way you're referring to it) will not reduce latency.
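The keystroke example can be sketched numerically. This toy model keeps the example's simplifying assumption (each keystroke eats 1 Mbps) and just counts how many 50 ms round trips are needed:

```python
# Toy model of the keystroke example: 10 letters over two links with
# identical 50 ms latency but different capacity. Assumes, as in the
# example, that each keystroke consumes 1 Mbps of bandwidth.

LETTERS = list("ABCDEFGHIJ")
LATENCY_MS = 50
PER_KEYSTROKE_MBPS = 1  # simplifying assumption from the example

def last_arrival_ms(link_mbps):
    """Return when the final letter is echoed on the remote screen."""
    batch = link_mbps // PER_KEYSTROKE_MBPS   # letters sent in parallel
    batches = -(-len(LETTERS) // batch)       # ceiling division
    return batches * LATENCY_MS

print(last_arrival_ms(10))  # all ten fit in one batch -> 50 ms
print(last_arrival_ms(1))   # one letter at a time -> 500 ms
```

Both links have the same latency; the wider one only finishes sooner because it moves more letters per trip.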
Connections are measured in two primary factors, latency and bandwidth. There is no such thing as "high speed" or "faster". Those are marketing doublespeak and are meaningless in the context of professionally managed connections.
I have a point to say here related to ping.
Usually, ICMP traffic is not given high priority, so measuring network delay/latency with ping or any other ICMP-based tool will not be entirely accurate.
The delay between two points can be calculated using the formula:

total delay = transmission delay + propagation delay + processing delay

Transmission delay is the time to push the packet's bits onto the wire. Propagation delay depends on the medium and is the time for the signal to reach the destination. Processing delay is incurred at the sending and receiving machines/routers.
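Here is a minimal sketch of that formula. The packet size, link speed, distance, and the 1 ms processing figure are all made-up illustrative numbers, not measurements:

```python
# Total one-way delay as the sum of the components described above.
# All figures below are assumed for illustration only.

def transmission_delay_ms(packet_bits, link_bps):
    """Time to push the packet's bits onto the wire."""
    return packet_bits / link_bps * 1000

def propagation_delay_ms(distance_km, speed_km_per_s=200_000):
    """Time for the signal to traverse the medium (fibre assumed)."""
    return distance_km / speed_km_per_s * 1000

# A 1500-byte packet on a 1 Mbps uplink travelling 20,000 km,
# plus an assumed 1 ms of processing at routers along the way:
total_ms = (transmission_delay_ms(1500 * 8, 1_000_000)
            + propagation_delay_ms(20_000)
            + 1.0)  # processing delay (assumed)
print(f"{total_ms:.0f} ms")
```

Note that at these sizes the propagation term dominates: a bigger pipe shrinks only the transmission term, which is why more bandwidth barely moves the ping time.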
Often it will, yes. But the two aren't the same thing and aren't directly linked. It just happens that typically connections with more bandwidth also have lower latency due to the technology being used.
But it's not always true. Consider a fast method of transferring massive amounts of data: filling up 12 2 TB hard drives with data and sending them by courier. The data transfer rate is VERY high (over 2,000 Mbps, given that you can send 24 TB in 24 hours). The latency is also very high (24 hours). Dialup has a much lower latency than that, but it'd take years to send 24 TB over dialup.
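The courier arithmetic works out like this (a back-of-the-envelope sketch, treating 1 TB as 10^12 bytes):

```python
# Throughput vs latency for the courier example:
# 12 x 2 TB drives delivered in 24 hours.

TOTAL_BITS = 12 * 2e12 * 8  # 24 TB expressed in bits
SECONDS = 24 * 3600         # one-day delivery

throughput_mbps = TOTAL_BITS / SECONDS / 1e6
print(f"throughput: {throughput_mbps:.0f} Mbps")  # ~2222 Mbps
print("latency: 24 hours")                        # but a full day of delay
```

Enormous bandwidth, terrible latency: exactly the combination that makes the two metrics impossible to trade against each other.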
It's not a good idea to directly equate the two. If you specifically need lower latency, you should ask about that specifically and not shop by bandwidth.
The only real solution for improving your latency is to reduce the number of hops between the two hosts in question.
If you are a big-enough corporate customer, you should be able to open a dialogue with your telecommunications providers on both ends about taking a shorter (possibly costlier) IP route between the two sites.
You have done a lot of speculating without gathering facts. Your best bet is to identify the source of the high latency: where does it start? Then you can answer the question of how to fix it.
Run a traceroute, or better yet, mtr (mytraceroute). If you're on Windows, you can use winmtr. PingPlotter is also a good tool for this.
Find where your high latency starts, then work to fix it. Throwing more bandwidth at your problem isn't the answer.
Higher bandwidth will not help much, unless bulk data is drowning out your interactive traffic. If both sides used fibre instead of xDSL/cable/wireless, that might shave 20-80 ms off your RTT.
Do a ping test using pingtest.net to determine the quality of each link. Latency is important, but jitter can make a huge difference as well. I would much rather have a slower (3 Mbps) connection without jitter than a faster (e.g. 15 Mbps) connection with jitter.
For TCP connections (e.g. SSH, telnet etc), some TCP tuning can help.
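One reason TCP tuning matters on a high-latency path: a single TCP stream's throughput is capped at roughly window_size / RTT, so the window needs to cover the bandwidth-delay product. The sketch below uses the 450 ms RTT from the question; the 10 Mbps link speed and the classic 64 KiB default window are assumptions for illustration.

```python
# Why TCP tuning helps on long, fat pipes: throughput <= window / RTT,
# so the window must cover the bandwidth-delay product (BDP).
# 450 ms RTT is from the question; the link speed is assumed.

RTT_S = 0.450
LINK_BPS = 10_000_000  # assumed 10 Mbps download link

bdp_bytes = LINK_BPS * RTT_S / 8
print(f"bandwidth-delay product: {bdp_bytes / 1024:.0f} KiB")

# What an old-style 64 KiB window (no window scaling) would cap you at:
capped_bps = 64 * 1024 * 8 / RTT_S
print(f"throughput with 64 KiB window: {capped_bps / 1e6:.2f} Mbps")
```

If the computed BDP exceeds the window your stack actually uses, raising the TCP buffer sizes (and making sure window scaling is on) will improve throughput, though it does nothing for the latency itself.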
You can also look at using a TCP accelerator; there are commercial ones, but PEPsal can already make a difference.
Perhaps your firewalls/routers are the issue...
The only way to really tell where the breakdown is is to run a traceroute, as stated above.
There are many different answers to this question, and the correct answer (in my opinion) is "It depends".
It doesn't matter if you have a 1 Gbit/s connection if it's saturated. TCP (and other protocols) rely on acknowledgements, which in 99% of cases are not prioritized correctly with QoS or similar technologies.
Symmetrical lines (SDSL, fibre, etc.) are generally better suited for low-latency operations, as they do not share RX with TX (which means that TCP ACKs, ICMP replies, etc. won't get held up if you are downloading at full blast). You still need QoS to guarantee bandwidth for sensitive applications (VoIP in particular).
Surprisingly, the number (and quality) of Google hits on prioritizing TCP ACKs is quite thin. Talk to any networking expert, though, and they'll know why you need this.