Mastering Windows Server 2016 Hyper-V


minimum bandwidth allows maximum utilization of all bandwidth until there is
bandwidth contention between different workloads, at which time each workload is
limited to its relative allocation. For example, suppose that I have the following three
workloads:


Live Migration:   MinimumBandwidthWeight 20
Virtual Machines: MinimumBandwidthWeight 50
Cluster:          MinimumBandwidthWeight 30

Under normal circumstances, the virtual machines could use all of the available
bandwidth, for example, 10Gbps if the total bandwidth available to the switch was
10Gbps. However, if a Live Migration were triggered while the virtual machines were
using all of the bandwidth, the virtual machines would be throttled back to 80 percent
and the Live Migration traffic would be guaranteed its 20 percent, which would be 2Gbps.
Notice that my weights add up to 100, which is not required but historically has been
recommended for manageability. However, we now live in a world of highly mobile VMs
that can move between hosts, so trying to keep the totals at exactly 100 on every host
is impractical. What is important is to set relative values for the different tiers of
bandwidth required and to use that scheme consistently throughout your entire
environment.
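
As a rough sketch of how these relative weights might be applied, the following
PowerShell creates a weight-based converged switch, adds management-OS vNICs, and
assigns the weights from the example. The switch name (ConvergedSwitch), the team
name (HostTeam), and the vNIC names are assumptions for illustration only; your
environment will use its own names.

# Create a converged external switch that uses weight-based minimum bandwidth.
# "HostTeam" and "ConvergedSwitch" are illustrative names, not requirements.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add management-OS vNICs for each traffic type on the converged switch.
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Assign the relative weights from the example: Live Migration 20, Cluster 30,
# and a default flow weight of 50 that covers the virtual machine traffic.
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 30
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 50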


Although using this new converged methodology is highly recommended, there is one
caveat related to SMB 3 usage. SMB 3 has a feature named SMB Direct, which uses remote
direct memory access (RDMA) for the highest possible network speeds and almost no
overhead on the host. Additionally, SMB 3 has a feature called SMB Multichannel,
which allows multiple network connections between the source and target of the SMB
communication to be aggregated together, providing both protection from a single
network connection failure and increased bandwidth, similar to the benefits of NIC
Teaming. (SMB Multichannel still works with NIC Teaming, because when a NIC team is
detected, SMB automatically creates four separate connections by default.) The
problem is that RDMA does not work with NIC Teaming. This means that if you wish
to take advantage of SMB Direct (RDMA), which would be the case if you were using
SMB to communicate with the storage of your virtual machines and/or if you were using
SMB Direct for Live Migration (which is possible in Windows Server 2012 R2 and
above), you would not want to lose the RDMA capability if it's present in your network
adapters. If you wish to leverage RDMA, your converged infrastructure will look
slightly different, as shown in Figure 3.44, which features an additional two NICs that
are not teamed but would instead be aggregated using SMB Multichannel. Notice that
Live Migration, SMB, and Cluster (CSV uses SMB for its communications) all move to
the RDMA adapters, because all of those workloads benefit from RDMA. While this
does mean four network adapters are required to support the various types of traffic
most efficiently, all of those types of traffic are fault-tolerant and have access to
increased bandwidth.
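
As a hedged sketch of how you might verify and use that RDMA capability, the
following PowerShell checks which adapters expose RDMA, confirms that the SMB client
sees RDMA-capable interfaces for SMB Multichannel to aggregate, and switches Live
Migration to use SMB. The adapter names "SMB1" and "SMB2" are assumptions standing
in for the two non-teamed RDMA NICs in this design.

# Check which physical adapters expose RDMA ("SMB1" and "SMB2" are
# illustrative names for the two non-teamed RDMA NICs).
Get-NetAdapterRdma -Name "SMB1","SMB2"

# Confirm that the SMB client sees the RDMA-capable interfaces that
# SMB Multichannel will aggregate.
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Configure the host to perform Live Migration over SMB so that it can
# benefit from SMB Direct and SMB Multichannel.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB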
