I have a webserver currently connected at 100 Mbit, and my provider offers an upgrade to 1 Gbit. I understand that this refers to throughput, but a faster link also puts each packet on the wire in less time (the effect growing with packet size), so I would expect a slight decrease in response time (e.g. ping). Has anybody ever benchmarked this?
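To make that expectation concrete, here is a rough back-of-the-envelope sketch of the serialization delay for a single frame (the per-packet header overhead is an assumption, not something I measured):

# Time to put one frame on the wire at a given link speed.
# The header overhead (Ethernet + IP + ICMP) is an assumption, not measured.
def serialization_ms(payload_bytes, link_bps, overhead_bytes=42):
    return (payload_bytes + overhead_bytes) * 8 / link_bps * 1000

for payload in (30, 300, 1400):
    print(f"{payload:4d} B payload: "
          f"{serialization_ms(payload, 100e6):.4f} ms @ 100 Mbit, "
          f"{serialization_ms(payload, 1e9):.4f} ms @ 1 Gbit")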
Example (100 Mbit to 100 Mbit server) with a 30 byte payload:
> ping server -i0.05 -c200 -s30
[...]
200 packets transmitted, 200 received, 0% packet loss, time 9948ms
rtt min/avg/max/mdev = 0.093/0.164/0.960/0.093 ms
Example (100 Mbit to 100 Mbit server) with a 300 byte payload (which is still below the MTU):
> ping server -i0.05 -c200 -s300
[...]
200 packets transmitted, 200 received, 0% packet loss, time 10037ms
rtt min/avg/max/mdev = 0.235/0.395/0.841/0.078 ms
So going from 30 to 300 bytes the average latency increases from 0.164 ms to 0.395 ms; I would expect this increase to be smaller for a 1 Gbit to 1 Gbit connection. I would even expect 100 Mbit to 1 Gbit to be faster, if the connection goes through a store-and-forward switch that waits until it has received the whole packet before forwarding it.
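A sketch of the store-and-forward effect I have in mind (assumed topology: client, one switch, server; only serialization delay is counted, so these are lower bounds, not predictions of the measured RTT):

# Assumed topology: client -> switch -> server, store-and-forward switch.
def ser_ms(frame_bytes, link_bps):
    """Time to clock one frame onto a link, in ms."""
    return frame_bytes * 8 / link_bps * 1000

def rtt_floor_ms(frame_bytes, client_bps, server_bps):
    # In each direction the frame is serialized twice: once by the sending
    # host and once more by the switch onto the next link.
    one_way = ser_ms(frame_bytes, client_bps) + ser_ms(frame_bytes, server_bps)
    return 2 * one_way

frame = 300 + 42  # ~300 byte ICMP payload plus headers (rough assumption)
for label, c, s in [("100 Mbit -> 100 Mbit", 100e6, 100e6),
                    ("100 Mbit -> 1 Gbit  ", 100e6, 1e9),
                    ("1 Gbit   -> 1 Gbit  ", 1e9,   1e9)]:
    print(f"{label}: {rtt_floor_ms(frame, c, s):.4f} ms")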
Update: Please read the comments on the answers carefully! The connection is not saturated, and I don't think this speed increase will be noticeable to a human for a single request; it is about many requests that add up (Redis, database, etc.).
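With purely hypothetical numbers, just to show the scale I have in mind:

saving_per_roundtrip_ms = 0.05   # assumed latency saving per request
sequential_requests     = 1000   # assumed back-end round trips per page view
print(saving_per_roundtrip_ms * sequential_requests, "ms saved per page view")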
Regarding the answer from @LatinSuD:
> ping server -i0.05 -c200 -s1400
200 packets transmitted, 200 received, 0% packet loss, time 9958ms
rtt min/avg/max/mdev = 0.662/0.866/1.557/0.110 ms