I have a Hyper-V environment with more than 8 VLANs, and I need a VM (a virtual firewall) with an interface in each VLAN. Apparently I cannot create a VM with more than 8 "Network Adapter"s, and have to resort to adding "Legacy Network Adapter"s for the remaining VLANs. Since Hyper-V's virtual switch ports are all access ports, or at least that's what I am aware of, I cannot make one of the adapters a trunk port and have the guest handle all the VLANs on it (or at least some of them - ideally one adapter for internal networks and one for external networks). So some VLANs are limited to emulated 100 Mbps traffic. While I can currently assign VLANs so that no production traffic is hit by this limitation, I'd like to know this for the future:
How do I configure a VM's network adapter to work as a trunk, or add more gigabit network adapters to a VM, in Hyper-V 2012 R2?
I would like to propose two different approaches that you would use if both carrier-grade performance and future scalability were requirements. Both methods will alleviate your performance and VLAN limitation issues.
SR-IOV Approach:
You could use SR-IOV, provided that you have an SR-IOV-capable network card. You can enable SR-IOV from Hyper-V Manager within the Virtual Switch Manager; note that it can only be enabled when the virtual switch is first created, not afterwards.
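If you prefer PowerShell, the steps look roughly like this (the switch, adapter and VM names below are placeholders, not anything from your environment):

```powershell
# Check whether the physical NICs (and the platform) actually support SR-IOV
Get-NetAdapterSriov

# IOV can only be enabled at switch creation time
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

# Connect the VM's adapter to the new switch and give it an IOV weight (1-100)
# so a Virtual Function gets assigned to it
Connect-VMNetworkAdapter -VMName "FW01" -Name "External" -SwitchName "SRIOV-Switch"
Set-VMNetworkAdapter -VMName "FW01" -Name "External" -IovWeight 100

# Confirm a VF was actually allocated
Get-VMNetworkAdapter -VMName "FW01" | Select-Object Name, IovWeight, Status
```

Bear in mind that once a VF is handed to the VM, that adapter's traffic bypasses the Hyper-V extensible switch, so any port ACLs or policies on the vSwitch no longer apply to it.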
This should theoretically give you native-speed NIC performance thanks to bypassing the VMBus, but be aware that this method introduces a hardware dependency, which goes against some of the key benefits of NFV and virtualization; for this reason I would also suggest the next approach :).
I have also listed supported NICs at the bottom of this answer.
OvS + DPDK Approach:
The next method I will suggest is to take over the Hyper-V switch's functionality by enabling Open vSwitch (OvS) at the VMM/host layer, which also provides a considerable boost to data-plane performance. This virtualizes the switching layer and provides extra functionality such as per-port VLAN trunking and distributed switching for scaling beyond a single system. Obviously, this is extremely useful to implement at an earlier stage rather than later, greatly reducing scaling complexity and providing you with a modern infrastructure setup (your friends and colleagues will be in awe)!
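To give a rough idea of the VLAN handling OvS gives you, here is a minimal sketch; the bridge, uplink and VM port names (br0, eth1, vnet0, vnet1) and the VLAN IDs are just examples:

```bash
# Bridge with the physical uplink attached
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1

# Present one VM interface as a trunk carrying only VLANs 10, 20 and 30
# (the guest handles the 802.1Q tags itself)
ovs-vsctl add-port br0 vnet0 trunks=10,20,30

# ...and another as a plain access port in VLAN 40
ovs-vsctl add-port br0 vnet1 tag=40
```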
Next is the DPDK element: DPDK is a userspace packet-processing framework whose Poll Mode Drivers (PMDs) bypass the slow, interrupt-driven Linux networking stack (which was not designed with virtualization in mind). There's plenty of documentation out there on the web about DPDK and OvS + DPDK.
By cutting out the interrupts with the PMDs and bypassing the Linux kernel network stack, you will get a formidable jump in VM NIC performance while also gaining functionality that gives you better control of the virtual infrastructure; this is the way modern networks are being deployed right now.
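To make the OvS + DPDK part concrete, the wiring looks roughly like this; note this is only a sketch assuming a Linux host running an OvS build with DPDK support compiled in, and the PCI address and port names are purely illustrative:

```bash
# Tell OvS to initialise its DPDK datapath
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# A bridge on the userspace (netdev) datapath
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev

# Attach a DPDK-bound physical NIC by its PCI address
ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 \
    type=dpdk options:dpdk-devargs=0000:03:00.0

# A vhost-user port for a VM to connect its virtio NIC to
ovs-vsctl add-port br-dpdk vhost-vm0 -- set Interface vhost-vm0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost-vm0
```

The guest then attaches a virtio NIC to the vhost-user socket instead of an emulated adapter, which is where the kernel-bypass performance gain comes from.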
SR-IOV supported NICs: