We have two datacentres, one on either side of the Atlantic, connected by a 10Gb VPLS link with 80ms latency and better than 0.0000001% packet loss.
When moving VMs between datastores on either end of the link we are seeing extremely slow speeds, e.g. 15MB/s.
We have confirmed the underlying performance of the storage arrays, and throughput and packet-loss tests confirm all the networks involved are running at 10Gb. Local data transfers within the same vCenter are very quick. We have also run iperf between VMs in each DC.
I assume this is due to a TCP windowing issue or something similar to how SMB/CIFS struggles on high-latency links.
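To make that suspicion concrete, here is my own back-of-the-envelope bandwidth-delay product calculation for this link (just my arithmetic, not anything taken from VMware documentation), showing the window size that the observed 15MB/s would imply:

```python
# Rough sketch: bandwidth-delay product on a 10Gb / 80ms link, and the
# single-stream TCP throughput implied by a given window size.

link_bps = 10e9        # link capacity: 10 Gb/s
rtt_s = 0.080          # round-trip time: 80 ms
observed_MBps = 15     # throughput we are actually seeing

# Bytes that must be in flight to fill the pipe.
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP needed to fill the link: {bdp_bytes / 1e6:.0f} MB")            # ~100 MB

# A single TCP stream is capped at window / RTT, so working backwards
# from 15 MB/s gives the effective window apparently in use.
effective_window = observed_MBps * 1e6 * rtt_s
print(f"Window implied by 15 MB/s: {effective_window / 1e6:.1f} MB")       # ~1.2 MB

# For comparison, a classic 64 KB window over 80 ms would cap out at:
win_64k = 64 * 1024
print(f"64 KB window over 80 ms: {win_64k / rtt_s / 1e6:.1f} MB/s")        # ~0.8 MB/s
```

So the transfer is behaving roughly like a single TCP stream with a window of only a megabyte or so, nowhere near the ~100 MB needed to fill the link.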
Is there any configuration within ESXi or vCenter to optimise this, such as specifying larger buffers or larger window sizes?
We are running vSphere 6.5 Enterprise Plus with the vCenters in Enhanced Linked Mode. These are separate clusters, and they do have a stretched VLAN between them.