
SriovSupport         : Supported
SwitchName           : "Default Switch"
NumVFs               : 32


Name                 : VM NIC 2
InterfaceDescription : Mellanox ConnectX-3 Pro Ethernet Adapter #2
Enabled              : True
SriovSupport         : Supported
SwitchName           : "Default Switch"
NumVFs               : 32
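
This output matches what the Get-NetAdapterSriov cmdlet returns for each physical
adapter. As a minimal sketch of how you would act on it, assuming the adapter name
from the output above and a hypothetical virtual machine named VM01:

# Query the SR-IOV capability of the physical adapters (produces output like the above)
Get-NetAdapterSriov

# Create an external switch with SR-IOV enabled; IOV support can be set
# only at switch creation time and cannot be added afterward
New-VMSwitch -Name "SR-IOV Switch" -NetAdapterName "VM NIC 2" -EnableIov $true

# Request a Virtual Function for the VM's network adapter by giving it
# an IOV weight greater than 0 (VM01 is a hypothetical name)
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 50

# Check whether a VF was actually assigned
Get-VMNetworkAdapter -VMName "VM01" | Format-List Name, IovWeight, Status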


The reality right now is that not many systems are SR-IOV capable. SR-IOV will be
used in targeted scenarios, because in most situations the standard Hyper-V network
capabilities via the virtual switch suffice for even very demanding workloads.
SR-IOV is aimed at the very few workloads with the highest networking-throughput
requirements. The other common place where SR-IOV implementations can be found is
in “cloud in a box” solutions, where a single vendor supplies the servers, the
network, and the storage. The one I have seen most commonly is the Cisco UCS
solution, which leverages SR-IOV heavily because many of its network capabilities
are implemented using Cisco’s own VM-FEX technology. An excellent multipart blog
series from Microsoft on SR-IOV will tell you everything you could ever want to know:


http://blogs.technet.com/b/jhoward/archive/2012/03/12/everything-you-wanted-to-know-about-sr-iov-in-hyper-v-part-1.aspx


VMQ


A technology that’s similar in spirit to SR-IOV is Virtual Machine Queue (VMQ). VMQ,
which was introduced in Windows Server 2008 R2, allows separate queues to exist on
the network adapter, with each queue mapped to a specific virtual machine. This
removes some of the switching work from the Hyper-V switch, because data arriving in
a given queue is already known to be destined for that virtual machine.
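
To see whether VMQ is available and how queues are mapped, the following sketch uses
the built-in NetAdapter cmdlets, again assuming the adapter name shown earlier:

# Show VMQ capability and current state for the adapter
Get-NetAdapterVmq -Name "VM NIC 2"

# Enable VMQ on the adapter if it is currently disabled
Enable-NetAdapterVmq -Name "VM NIC 2"

# List the individual queues and the VM each queue is mapped to
Get-NetAdapterVmqQueue -Name "VM NIC 2"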


The bigger benefit is that because there are now separate queues on the network
adapter, each queue can be processed by a different processor core. Normally, all the
traffic from a network adapter is processed by a single processor core to ensure that
packets are not processed out of sequence. For a 1Gbps network adapter, this may be
fine, but a single core cannot keep up with a fully loaded 10Gbps connection carrying
the traffic of multiple virtual machines. With VMQ enabled, each virtual machine can
be allocated its own queue on the network adapter, which allows different processor
cores in the Hyper-V host to process the traffic, leading to greater throughput.
(Each virtual machine is still limited to a single core, which caps its bandwidth at
around 3Gbps to 4Gbps; but this is far better than the combined traffic of all VMs
being limited to 3Gbps to 4Gbps.)
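
Which cores service those queues can be tuned with Set-NetAdapterVmq. The values
below are illustrative, assuming you want to keep core 0 free for other host work:

# Start VMQ processing at core 2 and spread it across up to 8 cores
Set-NetAdapterVmq -Name "VM NIC 2" -BaseProcessorNumber 2 -MaxProcessors 8

# Confirm the processor range now assigned to the adapter's queues
Get-NetAdapterVmq -Name "VM NIC 2" |
    Format-List Name, BaseProcessorNumber, MaxProcessors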


The difference between VMQ and SR-IOV is that with VMQ the traffic still passes
through the Hyper-V switch, because all VMQ provides is separate queues of traffic,
not entire virtual devices. In Windows Server 2008 R2, the assignment of a VMQ to a
virtual machine was static; typically, first come, first served, because each NIC
