packet from the VIP the client used, to the DIP of the backend server that will process
the client request. However, this does not occur in the SLB MUX. Instead, the
processing is split between the MUX and the VMSwitch, which greatly increases scale
and efficiency. In Figure 3.34, I have placed numbers next to the flow of a request
from a client along with the locations where the packet uses VIP or DIP. I walk
through this in the following list, but please refer to the figure as well:


1. The client sends a request to the VIP of the service being offered.
2. An edge device receives the request to the VIP. Through BGP, it has a routing table and knows that the VIP is available on either of the two MUXs. Using ECMP, it picks one and sends the packet, still addressed to the VIP, to that MUX.
3. The MUX receives the packet and picks a particular DIP to handle the request. Because that DIP is part of a virtual network, the MUX encapsulates the packet using VXLAN and sends it to the host that is running the VM with that DIP.
4. The VMSwitch (and specifically the VFP) receives the packet, removes the encapsulation, performs the NAT to rewrite the destination as the DIP instead of the VIP, and forwards the packet to the VM. Note that the probing of the availability of the VMs in the backend set is performed within the VMSwitch and communicated to the SLB MUXs via the SLB host agent, which is installed on every Hyper-V host. In most implementations, this probing is performed directly by the load balancer itself, but offloading it to the VMSwitch removes work from the MUX and improves scalability.
5. The VM processes the request and sends a response. Because the packet it received had the DIP as its destination, the VM knows nothing about the NAT, and so it writes its DIP as the source of the response.
6. Ordinarily, this response would be sent back to the load-balancer appliance to forward on, but that is not the case with the SDNv2 SLB. The VMSwitch performs NAT to rewrite the packet so that the source is the VIP rather than the DIP of the VM, which means the requesting client does not see a response coming from a different IP address than the one it sent to. The VMSwitch then bypasses the MUX completely and sends the packet directly to the edge router over the wire. This greatly reduces the load on the MUXs, meaning fewer MUXs are required in any solution. This is known as Direct Server Return (DSR); a conceptual sketch of this split NAT handling follows the list.
7. The edge device forwards the packet to the originating client.
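
To make the split in address handling concrete, here is a minimal, purely illustrative Python sketch of the flow just described. It is not the actual MUX or VFP code; the Packet class, the function names, and the IP addresses are invented for the example. It simply models where the destination and source addresses are rewritten as a packet moves through the MUX, the VMSwitch, and back out via DSR.

from dataclasses import dataclass, replace
import random

@dataclass(frozen=True)
class Packet:
    src: str                    # source IP address
    dst: str                    # destination IP address
    encapsulated: bool = False  # True once wrapped in VXLAN toward a host

VIP = "203.0.113.10"                      # example VIP advertised via BGP
BACKEND_DIPS = ["10.0.0.4", "10.0.0.5"]   # example DIPs of VMs in the backend set

def mux_inbound(pkt: Packet) -> tuple[Packet, str]:
    """Step 3: the MUX picks a DIP and encapsulates the packet (still
    addressed to the VIP) toward the Hyper-V host that owns that DIP."""
    dip = random.choice(BACKEND_DIPS)   # the real MUX uses hashing, not random choice
    return replace(pkt, encapsulated=True), dip

def vmswitch_inbound(pkt: Packet, dip: str) -> Packet:
    """Step 4: the VMSwitch (VFP) removes the encapsulation and NATs the
    destination from the VIP to the DIP before delivering to the VM."""
    return replace(pkt, dst=dip, encapsulated=False)

def vm_respond(pkt: Packet) -> Packet:
    """Step 5: the VM replies; it saw only the DIP, so it uses the DIP as source."""
    return Packet(src=pkt.dst, dst=pkt.src)

def vmswitch_outbound_dsr(pkt: Packet) -> Packet:
    """Step 6: Direct Server Return - the VMSwitch NATs the source back to the
    VIP and sends the reply straight toward the edge, bypassing the MUX."""
    return replace(pkt, src=VIP)

# Walk one request/response through the flow.
client_ip = "198.51.100.20"
request = Packet(src=client_ip, dst=VIP)          # steps 1-2: client -> edge -> MUX
encapsulated, chosen_dip = mux_inbound(request)   # step 3
delivered = vmswitch_inbound(encapsulated, chosen_dip)  # step 4
reply = vm_respond(delivered)                     # step 5
outbound = vmswitch_outbound_dsr(reply)           # step 6

assert delivered.dst == chosen_dip   # the VM sees its DIP, not the VIP
assert outbound.src == VIP           # the client sees the VIP it sent to
assert outbound.dst == client_ip     # step 7: edge forwards to the client

The point of the sketch is that the two NAT rewrites (VIP to DIP inbound, DIP back to VIP outbound) both happen in the VMSwitch functions, while the MUX only chooses a DIP and encapsulates, which is why the return traffic never needs to touch the MUX.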

Although this example shows the SLB being used by an external party, the SLB can
also be used as an internal load balancer and works in exactly the same way. The only
difference is the frontend IP configuration, with the VIP being an externally or
internally facing IP address. Another optimization happens for east-west use of the
SLB—that is, an internal load balancer providing services between VMs in a virtual
network. In this scenario, only the first packet to an internal load balancer is sent via
the MUX. After the first packet is sent and the Hyper-V host that is hosting the
VM with the chosen target DIP has been identified, a redirect packet is sent from the
