Mastering Windows Server 2016 Hyper-V


Figure 3.44 A converged Hyper-V host configuration with separate NICs for SMB
(RDMA) traffic


In some environments, requiring two extra RDMA network adapters, and consuming two
extra RDMA-capable switch ports for each host, may be acceptable, but that is a steep
price to pay simply because RDMA does not work with NIC Teaming. Windows Server
2016 solves this problem with the Converged NIC and Switch Embedded Teaming. With
the Converged NIC, Data Center Bridging (DCB) is used to assign traffic classes to the
different types of traffic traversing the NIC; for example, RDMA uses one traffic class,
while regular vSwitch traffic uses another. This enables the SDNv2 QoS to continue to
correctly manage the bandwidth assigned to each type of traffic. However, even with
the Converged NIC, you still need the ability to group multiple NICs together for scale
and availability, and NIC Teaming is not compatible with the VFP extension.
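The DCB traffic classes described above are configured through the NetQos PowerShell cmdlets. The following is a minimal sketch, not a prescriptive configuration: the adapter names and the 50 percent bandwidth reservation are illustrative values that you would replace with your own.

```powershell
# Install the Data Center Bridging feature on the host
Install-WindowsFeature -Name Data-Center-Bridging

# Classify SMB Direct (RDMA, TCP port 445) traffic into 802.1p priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for the SMB priority only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve a share of bandwidth for the SMB traffic class (50% is illustrative)
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB/QoS settings to the physical adapters (names are examples)
Enable-NetAdapterQos -Name "NIC1","NIC2"
```

For DCB to be effective end to end, the physical switch ports must be configured with matching PFC and ETS settings for the same priorities.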


Switch Embedded Teaming (SET) provides a teaming solution integrated directly into
the VMSwitch, allowing up to eight network adapters to be teamed within the switch.
While eight may seem low compared to the 32 supported by LBFO NIC Teaming, SET
is focused on SDNv2 and modern network adapters running at 10Gbps, 40Gbps, and
higher, and it is unusual to see a server with more than eight adapters of that class.
SET requires that all adapters be identical, including make, model, firmware, and
driver. SET supports switch-independent teaming only, with no LACP support, and
offers the Dynamic and Hyper-V Port modes of load distribution; active/passive
teaming is not supported. Most important, SET is RDMA/DCB aware, which enables a
shared set of NICs to be used both for RDMA and as part of the VMSwitch. This is
shown in Figure 3.45.
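A SET team is created as part of creating the VMSwitch itself rather than as a separate LBFO team. A minimal sketch follows; the switch name, adapter names, and vNIC names are illustrative, and the two physical NICs are assumed to be identical RDMA-capable adapters as SET requires.

```powershell
# Create a vSwitch with Switch Embedded Teaming across two identical RDMA NICs
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# Optionally change the load-distribution mode (Dynamic or HyperVPort)
Set-VMSwitchTeam -Name "SETSwitch" -LoadBalancingAlgorithm HyperVPort

# Add host vNICs for SMB traffic and enable RDMA on them
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETSwitch"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"
```

You can confirm the team members and load-balancing mode afterward with Get-VMSwitchTeam.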
