I recently set up a new environment consisting of:
- QSAN storage with 10 GbE connectivity
- Mellanox 10 GbE switches
- 4 x physical nodes connected to the LAN and SAN over 10 GbE
The physical hosts are connected to the SAN storage using MPIO. Performance tests were run on all physical servers against the SAN and show about 200 MB/s for 8K random writes on a single SSD (which is presented as a CSV in the cluster). The tests were done with diskspd.
I then created a Hyper-V VM on the Cluster Shared Volume and ran the same diskspd test inside the virtual machine: 8K random write comes in at 0.5 MB/s.
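For reference, the 8K random-write pattern can be reproduced with a diskspd invocation along these lines (file path, file size, duration, thread count and queue depth below are illustrative values, not necessarily my exact parameters):

```
# 8K blocks, 100% random writes, 4 threads, 8 outstanding I/Os per thread,
# 60 s duration, caching disabled, latency statistics, 10 GiB test file
diskspd.exe -b8K -r -w100 -t4 -o8 -d60 -Sh -L -c10G C:\ClusterStorage\Volume1\testfile.dat
```

On the physical host the target file sits on the CSV path; inside the guest the same command points at a file on C:.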
When checking disk latency inside the Hyper-V guest, I see values as high as 10 seconds.
I'm at a loss as to why this is happening. It's probably not the SAN storage, the iSCSI setup, or MPIO, since I get the results I would expect when running the test on the physical host, so there must be something wrong with the Hyper-V configuration.
I'm running the test against the C: drive in the Hyper-V guest, which is a fixed-size disk attached to the IDE controller (the VM cannot boot from the virtual SCSI controller). The SAN volume is formatted with a 64K allocation unit size.
The CSV is owned by the same host that is running the Hyper-V guest.
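For completeness, the disk type and CSV ownership can be double-checked on the host roughly like this (the VHDX path is a placeholder for mine):

```
# Confirm the guest's boot disk is a fixed VHDX and check its sector sizes
Get-VHD -Path "C:\ClusterStorage\Volume1\VM1\VM1.vhdx" |
    Format-List VhdFormat, VhdType, LogicalSectorSize, PhysicalSectorSize

# Confirm which cluster node currently owns the CSV
Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State
```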
Update: The guest VM is Generation 1, unfortunately.
If you use Broadcom NICs, try disabling VMQ on the virtual switches and on the physical network adapters: http://www.dell.com/support/article/ua/ru/uabsdt1/SLN132131/EN
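From an elevated PowerShell prompt on the host, roughly (the adapter and VM names are examples, substitute your own):

```
# Show which physical adapters currently have VMQ enabled
Get-NetAdapterVmq

# Disable VMQ on the physical adapter(s) bound to the virtual switch
Disable-NetAdapterVmq -Name "NIC1"

# Turn VMQ off for the guest's virtual NIC as well
Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 0
```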
You might also check that the latest Hyper-V integration services/drivers are installed in the guest.
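You can see what the host reports for the guest's integration components with something like this ("VM1" is a placeholder):

```
# Version and update state of the integration services as reported by the host
Get-VM -Name "VM1" | Select-Object Name, IntegrationServicesVersion, IntegrationServicesState

# State of the individual integration services
Get-VMIntegrationService -VMName "VM1"
```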
Also, are the volumes inside your VHDX files formatted with NTFS?
One more thing that comes to mind is the MTU. Try setting it to 9000 (jumbo frames) on the iSCSI network, though this usually gives only a small performance increase.
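If you try jumbo frames, they need to be set end to end (host NICs, switch ports and the SAN), otherwise you get fragmentation or drops. A host-side sketch, with an example adapter name and SAN IP:

```
# Enable jumbo frames on the iSCSI-facing NIC; most drivers expose the setting
# as *JumboPacket and expect 9014 for a 9000-byte payload
Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify the path end to end: 8972 = 9000 minus 28 bytes of IP/ICMP headers
ping -f -l 8972 192.168.100.10
```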