TLDR: I have measured data transfer from virtual machines to physical machines at a fraction of the physical-to-physical rate. ESXi advertises "gigabit adapters," so transfers should at least run as fast as the receiving end can write or cache the data. I want to increase the speed so the backups that run inside these VMs complete in a timely fashion.
Details:
I have performed five tests (three from inside the VM, two physical-to-physical as a baseline) to verify the speed problem, but I am not sure where the problem lies.
Transferring a 3 GB test file, I performed the following tests (approximate command forms are sketched after the list):

1. Robocopy the file from the guest OS to the storage server's RAID array over SMB: 12 MB/s
2. Robocopy the file from the guest OS to a virtual hard disk that resides on the storage server and is mounted over NFS: 15 MB/s
3. Robocopy the file from the guest OS to an external USB 3.0 hard drive (with a larger cache buffer) attached to the storage server: 10 MB/s
4. Rsync the file from the RAID array to the USB 3.0 drive, within the same storage server: 56 MB/s
5. Rsync the file from another physical server to the storage server's external drive: 55 MB/s
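For reference, the commands were of roughly this form; the paths, share names, and file name below are placeholders, not the real ones:

    :: Test 1: guest OS -> storage server RAID array over SMB (placeholder paths)
    robocopy C:\testdata \\storageserver\raidshare test3gb.bin

    # Test 4: RAID array -> USB 3.0 drive within the same storage server (placeholder mounts)
    rsync --progress /mnt/raid/test3gb.bin /mnt/usb3/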
It's clear that copying files from within the guest OS over the gigabit network to the storage server, whether by SMB, NFS, or any other protocol, is roughly four to five times slower than copying from one physical box to another.
Question is: why? Is it fixable?
This has become important because the data on the VMs has reached a considerable size, and backups are now taking too long to complete. We start them at night, and they still haven't finished by morning when people start showing up for work.
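To put numbers on it: at 12 MB/s a VM can push only about 43 GB per hour (12 MB/s × 3600 s), so a 10-hour overnight window covers roughly 430 GB. At the ~55 MB/s we see between physical machines, the same window would cover nearly 2 TB.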
Hardware: HP ProLiant ML110 G6 Tower
VM network adapter: e1000
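For what it's worth, this is how I'd confirm the adapter from inside the Windows guest; the e1000 typically presents itself as an Intel PRO/1000 device, though the exact name can vary by driver version:

    :: Inside the guest: list network adapter names (e1000 appears as an Intel PRO/1000 device)
    wmic nic get name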