I am testing IPv6 latency from a Linux box, and I noticed an odd difference between IPv4 ping and IPv6 ping when running in adaptive mode (-A):
# ping -n -A -q -c 500 speedtest.steffann.nl
PING speedtest.steffann.nl (10.3.10.20) 56(84) bytes of data.
--- speedtest.steffann.nl ping statistics ---
500 packets transmitted, 500 received, 0% packet loss, time 240ms
rtt min/avg/max/mdev = 0.297/0.364/7.213/0.317 ms, ipg/ewma 0.481/0.358 ms
The average rtt is 0.364 ms and the count is 500, so the pings themselves account for about 182 ms (500 × 0.364 ms). The reported runtime of 240 ms is a bit higher, but that is not a surprising amount of overhead.
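As a quick sanity check on that arithmetic (plain Python, nothing ping-specific; all numbers are copied from the output above):

    # Does avg rtt x packet count explain the reported runtime?
    count = 500
    avg_rtt_ms = 0.364          # avg from the IPv4 rtt line
    reported_runtime_ms = 240   # "time 240ms" from the statistics line

    expected_ms = count * avg_rtt_ms
    print(f"expected ~{expected_ms:.0f} ms, reported {reported_runtime_ms} ms")
    # -> expected ~182 ms, reported 240 ms: ~58 ms of overhead, nothing alarming

Now the IPv6 ping: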
# ping6 -n -A -q -c 500 speedtest.steffann.nl
PING speedtest.steffann.nl(fd9c:262f:e839:310::20) 56 data bytes
--- speedtest.steffann.nl ping statistics ---
500 packets transmitted, 500 received, 0% packet loss, time 5000ms
rtt min/avg/max/mdev = 0.508/0.751/2.197/0.254 ms, pipe 2, ipg/ewma 10.021/0.725 ms
The average rtt is about twice as long, so I would expect the runtime to be about twice as long as well. Instead it is more than 20 times as long (5000 ms versus 240 ms), and it works out to exactly 10 ms per ping, which matches the reported ipg of 10.021 ms.
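The same back-of-the-envelope check for the IPv6 run (again plain Python, with the numbers copied from the outputs above):

    # The IPv6 rtt alone predicts a much shorter runtime than
    # the fixed 10 ms-per-ping cadence the run actually shows.
    count = 500
    avg_rtt_ms = 0.751          # avg from the IPv6 rtt line
    reported_runtime_ms = 5000  # "time 5000ms" from the statistics line

    expected_ms = count * avg_rtt_ms
    gap_ms = reported_runtime_ms / count
    print(f"expected ~{expected_ms:.0f} ms, reported {reported_runtime_ms} ms")
    print(f"effective gap: {gap_ms:.1f} ms/ping (IPv4 run: {240 / count:.3f} ms/ping)")
    # -> expected ~376 ms, reported 5000 ms
    # -> effective gap: 10.0 ms/ping (IPv4 run: 0.480 ms/ping)

Even allowing for the doubled rtt, the run should have finished in well under half a second.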
It is probably an implementation artifact somewhere. Does anybody know where this comes from?