I have a new Server 2022 Hyper-V host that will have (initially) 6 guests. It has two NICs that I want to team to connect to the physical switch for performance and reliability.
My understanding is there are two ways to do this:
- Old-and-busted: LBFO/LACP teaming at the OS level, as in Server 2012.
- New-hotness: Switch Embedded Teaming (SET), for Server 2016 and later.
I was able to make both options work in setup and testing so far, but everything I've read says I should strongly prefer option 2.
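For concreteness, the two setups I tested look roughly like this; the adapter, team, and switch names ("NIC1", "NIC2", "LbfoTeam", "SETSwitch") are just placeholders for my own:

```
# Option 1: LBFO team with LACP at the OS level, then a normal vSwitch on top of it.
# The physical switch ports need a matching LACP port channel.
New-NetLbfoTeam -Name "LbfoTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "LbfoSwitch" -NetAdapterName "LbfoTeam" -AllowManagementOS $true

# Option 2: Switch Embedded Teaming -- the vSwitch teams the NICs itself.
# SET is switch-independent, so no LACP configuration on the physical switch.
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true
```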
Here's the issue: from what I've seen of option 2, Hyper-V load-balances by binding each guest's MAC address to one of the physical ports. That means a single VM will never get more bandwidth than a single port can provide, and indeed, with this option the interface visible inside a guest looks like a single NIC running at the speed of one of the individual host NICs.
This is wrong for my setup, because one of the guests is not like the others. It's the main reason the physical hardware exists, and it should see the lion's share of the traffic. I know that with most teaming options a single session can't exceed the speed of a single port, but I also expect this server to handle many concurrent sessions.
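For what it's worth, this is how I've been checking what the SET switch is actually doing; "SETSwitch" is just the name I gave my vSwitch:

```
# Show how the SET team is configured, including the load-balancing algorithm
# (HyperVPort vs. Dynamic) and which physical NICs are members.
Get-VMSwitchTeam -Name "SETSwitch" |
    Format-List Name, TeamingMode, LoadBalancingAlgorithm, NetAdapterInterfaceDescription
```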
So the question:
Did I miss something that would let Switch Embedded Teaming make better use of the available ports, and if so, what do I need to change? Or is this a situation where I should fall back to the older teaming option?
Found my own answer:
Somehow the load-balancing mode for the vSwitch was set to HyperVPort, probably the result of also experimenting with LACP. I set it to Dynamic, and now things are better using the newer SET mode; I didn't even need to re-assign or reconnect the VMs' network interfaces.
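In case it helps anyone else, the equivalent change should be a single cmdlet along these lines ("SETSwitch" again being whatever your vSwitch is called):

```
# Change the SET vSwitch's load balancing from Hyper-V Port to Dynamic.
# This took effect live for me; the VMs' vNICs stayed connected.
Set-VMSwitchTeam -Name "SETSwitch" -LoadBalancingAlgorithm Dynamic
```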