

SMB DIRECT


While there are other SMB technologies, such as encryption, Receive Side Scaling, VSS
for SMB File Shares, and more, the last feature I want to mention is SMB Direct,
which enables the use of RDMA-capable network adapters with SMB. I discussed
remote direct memory access (RDMA) in Chapter 3, “Virtual Networking,” as it relates
to network adapters, and it’s equally important to SMB.


With SMB Direct leveraging the RDMA capability of the network adapter, there is
almost no utilization of server processor resources. The network adapter is essentially
pointed to a block of memory containing the data that needs to be sent to the target,
and then the card takes care of sending it at the fastest possible speed and with very
low latency. Behind the scenes, the RDMA network adapter may use iWARP, RDMA
over Converged Ethernet (RoCE), or InfiniBand, but that does not matter to the SMB
protocol, which just benefits from the RDMA capability.
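
A quick way to check ahead of time whether your adapters expose RDMA is PowerShell. The
following is a minimal check, assuming the adapters and their drivers are already installed;
Get-NetAdapterRdma shows the RDMA state per adapter, and Get-SmbClientNetworkInterface shows
the interfaces as SMB sees them, including RDMA and RSS capability.

# List the network adapters and whether RDMA is enabled on each
Get-NetAdapterRdma

# Show the network interfaces as SMB sees them, including RDMA and RSS capability
Get-SmbClientNetworkInterface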


There is no special requirement to leverage SMB Direct. Like everything else with
SMB, if the capability exists, it just happens. Initially, a regular SMB connection is
established between the client and server. The client then discovers all of the possible
connection paths, which enables the use of multichannel, and queries the capabilities of
the network adapters. If both the sender and receiver support RDMA, an RDMA connection is
established and SMB operations switch from TCP to RDMA, completely transparently.
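
To verify that the switch to RDMA has actually happened, you can inspect the multichannel
connections while SMB traffic is flowing. This is a rough check (the exact output columns
vary by operating system version), showing whether the client and server side of each
connection is RDMA capable:

# While SMB traffic is active (for example, during a file copy to the share), list the
# multichannel connections and check the client/server RDMA capability columns
Get-SmbMultichannelConnection

# If present, the "SMB Direct Connection" performance counter set also confirms RDMA traffic
Get-Counter -ListSet "SMB Direct Connection"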


If you used SMB Direct in Windows Server 2012, you will see a 50 percent
performance improvement with SMB Direct v2 in Windows Server 2012 R2 for
small I/O workloads, specifically 8 KB IOPS, which are common in virtualization
scenarios.


The performance improvement is important because SMB is leveraged for more than
just file operations now. SMB is also used by Live Migration in some configurations,
specifically to take advantage of RDMA-capable NICs. Remember, do not use NIC
Teaming with RDMA-capable network adapters because NIC Teaming blocks the use
of RDMA.
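
If you want Live Migration to use SMB, and therefore SMB Direct where the hardware supports
it, the performance option is set per host. A minimal sketch, run on each Hyper-V host:

# Switch Live Migration traffic to SMB so it can benefit from SMB Direct and Multichannel
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Confirm the current setting
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption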


HOW TO LEVERAGE SMB 3 IN YOUR ENVIRONMENT


If right now your datacenter has every virtualization host connected to your top-of-
the-line SAN using Fibre Channel, then most likely SMB 3 will not factor into that
environment today. However, SMB 3 can help if not every server is connected to the SAN;
if you have new environments, such as datacenters or remote locations, that don’t have a
SAN; or if a SAN is planned but you want to minimize the fabric costs of Fibre Channel
cards and switches.
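
As a simple starting point, that storage can be an SMB 3 file share to which the Hyper-V
hosts’ computer accounts have access. The following is a minimal sketch using hypothetical
server, share, and account names; the NTFS permissions on the folder also need to grant the
same accounts access.

# On the file server: create the folder and share it, granting the Hyper-V host computer
# accounts and the Hyper-V administrators Full Control (example names)
New-Item -Path D:\VMStore -ItemType Directory
New-SmbShare -Name VMStore -Path D:\VMStore -FullAccess "CONTOSO\HVHOST1$", "CONTOSO\HVHOST2$", "CONTOSO\Hyper-V Admins"

# On a Hyper-V host: store a new virtual machine on the share by using its UNC path
New-VM -Name TestVM -MemoryStartupBytes 1GB -Path \\FS01\VMStore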


If you already have a SAN but do not currently have the infrastructure (for example,
the HBAs) to connect every host to the SAN, then a great option is shown in Figure
