My customer complains about low internet speeds, yet when measured with Speedtest.net the speeds are acceptable. Periodically measured downloads, however, reach only 10% to 30% of the nominal speed. I cannot explain this.
Some background: the problematic connection is on one of those sunny Caribbean islands where fast internet is not the greatest asset. Lately internet speeds have become decent, up to 200 Mbps, but the ping round trip to (say) Amsterdam is about 180 ms.
The customer has a 100 Mbps fiber connection. When carrying out a speed test on a Windows machine (speedtest.net) to the ISP's central office we obtain 95 Mbps. When using the same speed test to Amsterdam we reach 60-70 Mbps. Fully acceptable.
Some time ago I installed a Raspberry Pi which periodically wgets a file from one of my servers in Amsterdam, in a datacenter that is directly connected to AMS-IX. Using this command:
wget -O /dev/null --report-speed=bits http://aserv.example.net/~myuser/links/M77232917.txt
The .txt file is 23 MByte of numbers. (It is actually the second-largest known Mersenne prime, about 23 million digits.)
When I download that file on the problematic network, wget reports this:
/dev/null 100%[====================================================================>] 22.81M 11.6Mb/s in 17s
2019-02-08 14:27:55 (11.2 Mb/s) - ‘/dev/null’ saved [23923322/23923322]
That is while speedtest.net, at the same time, reports 60-70 Mbps.
I know the RasPi has its limitations, but this speed varies wildly. One time the RasPi reports 11 Mbps, the next time 22 Mbps, and sometimes as low as 1.5 Mbps.
When I do this test with a really powerful laptop, top speeds are somewhat higher (up to 30 Mbps), but it shows the same lows. So there is a RasPi limitation on the high side, but that does not explain the readings of 10 Mbps and lower.
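To quantify how wildly it varies, something along these lines can be run from cron on the Pi; a minimal sketch, where the 15-minute schedule, the log path and the output parsing are illustrative rather than the exact setup:

#!/bin/sh
# log-wget-speed.sh - append one timestamped speed reading per run.
# Illustrative helper; path, schedule and parsing are assumptions.
URL=http://aserv.example.net/~myuser/links/M77232917.txt
LOG=/home/pi/wget-speed.log
# wget prints its summary on stderr; keep only the "(11.2 Mb/s)" part.
SPEED=$(wget -O /dev/null --report-speed=bits "$URL" 2>&1 | grep -o '([0-9.]*[[:space:]][KMG]b/s)')
echo "$(date -u '+%F %T') $SPEED" >> "$LOG"

# crontab entry: */15 * * * * /home/pi/log-wget-speed.sh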
I issued exactly the same command from a server in a datacenter in München, Germany: speed 96 Mbps.
Then from a consumer 100 Mbps fiber connection in the Netherlands: 65 Mbps.
Then at my home, which has nominal 10 Mbps ADSL: Speedtest shows 10 Mbps, wget gives 8.5 Mbps. Equal in my book.
This rules out any limitation on the server that hosts the downloaded file.
I do not expect that anyone can point out the cause of the slowness at the customer premises. But can anyone explain the discrepancy between the speedtest.net result and the wget result?
Is there something the speedtest ignores, or does it measure only the peaks? Or is wget seriously influenced by long ping times?
I feel that the wget test gives the real, effective speed, while speedtest mainly shows the advertised speed.
ISPs often prioritize traffic to speedtest.net so that they can brag how fast their connections are, while in reality, they don't provide that much bandwidth. They're perfectly aware that most users will only check that site for confirmation.
You also have to keep in mind that transfer speed depends on both the client and the server. In today's world most servers throttle in one way or another.
Finally, it's pointless to expect stable bandwidth for overseas connections. There is no such thing: traffic has to pass through a huge number of switches, fibers and datacenters to reach its destination, and it only takes one of those moving parts slowing down to drag the whole transfer down.
In addition to the other reasons posted, TCP connections don't work well with large transfers once the bandwidth-delay product becomes large, like on an otherwise fast connection to an island: the long round-trip time means a single connection needs a very large TCP window to keep the pipe full.
See Wikipedia's entry on TCP tuning.
So Speedtest can dump a small file through the connection at 95 Mb/s, but wget can only manage 10 Mb/s on a 20 MB file.
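To put numbers on that for this particular path (the 100 Mbps and 180 ms figures come from the question; the rest is a rough illustration): 100 Mbit/s times a 0.180 s round trip is 18 Mbit, or about 2.25 MB that has to be "in flight" at any moment. A single TCP connection can only sustain that if its window can grow to at least that size, which is worth checking on the Pi:

# Bandwidth-delay product for this path (figures assumed from the question):
#   100 Mbit/s * 0.180 s = 18 Mbit, roughly 2.25 MB in flight
# Show the current TCP buffer limits (third value of tcp_rmem is the maximum):
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max

# If that maximum is well below ~2.25 MB, raising it may help a single
# long-RTT download (the values below are only an example):
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
sudo sysctl -w net.core.rmem_max=4194304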
wget gives a good practical measure of the speed. Speedtest's tests probably include some kind of parallelism, which can explain the higher numbers. For a good average speed test, I think the download should take at least 90-120 seconds.
One reason could be that the maximum speed often cannot be reached with just a single TCP connection.
Speedtest.net recently introduced a single connection mode. Try this and see if it makes a difference.
Then, for the download, use for example aria2 with parameters for multiple connections and compare, e.g.
aria2c -d /dev -o null --allow-overwrite=true --file-allocation=none --max-connection-per-server=8 --min-split-size=1M http://aserv.example.net/~myuser/links/M77232917.txt
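As a rough back-to-back comparison (same URL as in the question; -s 8 is added here so aria2 splits the download into eight pieces and can actually use all eight connections):

# Single TCP connection, as in the original test:
wget -O /dev/null --report-speed=bits http://aserv.example.net/~myuser/links/M77232917.txt

# Eight parallel connections to the same server:
aria2c -d /dev -o null --allow-overwrite=true --file-allocation=none \
  -s 8 --max-connection-per-server=8 --min-split-size=1M \
  http://aserv.example.net/~myuser/links/M77232917.txt

If aria2c gets close to the speedtest figure while plain wget stays low, the bottleneck is per-connection (window size versus the ~180 ms round trip), not the raw link capacity.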
Use the Fast.com Internet Speed Test. This is a Netflix-based speed test, meaning ISPs cannot differentiate it from Netflix itself.
This is generally a more accurate test than the others. People are not so much worried about how fast a web page loads, but rather about how quickly videos buffer, given the higher bandwidth needed to stream video.
ISPs often boost speeds based on the domain someone is connecting to if it is a speed test, or when port 8080 is used, whereas Netflix uses port 80, which does not get that priority.
Is it just me, or did no one notice that he said Mbps while the wget output lists "MB/s"?
Getting 11.2 MB/s on a 60 Mbps connection is normal.
Mbps and MB/s are two different units.
"A megabit is 1/8 as big as a megabyte, meaning that to download a 1 MB file in 1 second you would need a connection of 8 Mbps." So 11.2 MB/s x 8 is roughly 90 Mbps; 11.2 MB/s is actually good for a connection reporting 60-70 Mbps.
Are people having memory loss answering this? You will never get 70 MB/s with a Speedtest result of 70 Mbps.