I noticed that between two servers I get about 500 Mbps of throughput, while between two others I get about 25 Mbps. While that may not be related to my issue (which I'm not interested in solving for the purposes of this question), it got me thinking: you could have a 1 Gbps port at a data center, but if it's 20 low-quality hops away from any given end user or other server (for example, because the carrier has very poor peering), you're apt to get subpar performance.
Is anyone aware of a good way to test the quality of a server's connection in this respect? I'm imagining a website you visit that makes the server upload and download data and measure latency and jitter against 100 different machines representing a variety of major ISPs and data center carriers; almost like a speedtest.net x100.
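To sketch what I'm imagining (the target addresses below are hypothetical placeholders; a real test would need around 100 endpoints on different networks, plus throughput probes on top of latency):

```python
import re
import subprocess

# Hypothetical probe targets; a real test would use ~100 endpoints
# spread across many ISPs and carriers.
TARGETS = ["192.0.2.10", "198.51.100.20", "203.0.113.30"]

for host in TARGETS:
    # 10 ICMP echoes per target; Linux ping's summary line reports
    # min/avg/max/mdev, where mdev is a rough jitter figure.
    out = subprocess.run(["ping", "-c", "10", "-q", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)", out)
    if m:
        print(f"{host}: avg {m.group(2)} ms, jitter ~{m.group(4)} ms")
    else:
        print(f"{host}: unreachable")
```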
Ignoring your off-topic product recommendation request (and yes, there are services that do geographically distributed performance measurements):
A user with a 9600 baud dial-in connection will never get a download faster than that, regardless of what your datacenter uplink is...
The maximum bandwidth between your server and your user is determined by the slowest segment.
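As a trivial illustration of that bottleneck principle (the per-segment capacities below are made-up numbers), the end-to-end ceiling is simply the minimum along the path:

```python
# Hypothetical per-segment capacities along one path, in Mbps.
segments = {
    "server uplink": 1000,
    "carrier peering link": 100,
    "user's last mile": 25,
}

# The end-to-end ceiling is the slowest segment, not the fastest.
bottleneck = min(segments, key=segments.get)
print(f"Best case: {segments[bottleneck]} Mbps, limited by the {bottleneck}")
```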
You typically control only some of the network segments between your users and your server(s).
Even in the virtual world, actual geographic distance matters, especially since the route traffic takes rarely follows the shortest path over the globe; there are only so many cables.
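To put a number on that (a back-of-the-envelope sketch; the distances are illustrative): light in fiber propagates at roughly 200,000 km/s, which sets a hard floor on round-trip time no matter what your port speed is.

```python
# Rough speed of light in optical fiber (~2/3 of c), in km/s.
FIBER_KM_PER_S = 200_000

def min_rtt_ms(path_km: float) -> float:
    """Lower bound on round-trip time for a given one-way cable distance."""
    return 2 * path_km / FIBER_KM_PER_S * 1000

# Illustrative distances; real cable routes run longer than great-circle paths.
for route, km in [("same metro", 50), ("cross-country", 4_000), ("transatlantic", 6_500)]:
    print(f"{route}: at least {min_rtt_ms(km):.1f} ms RTT")
```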
With a single server you will always be closer (fewer network hops, shorter distance, lower latency, more bandwidth) to some users and further from others.
The solution to that is to not rely on a single server but to use a content delivery network.