and offer that to clients, the scalability would be limited to the capacity of that single
server. Furthermore, if that single server was unavailable for planned or unplanned
reasons, the service offered would be unavailable. Therefore, a load balancer is always
recommended to act as the entry point for client requests. (A client is something that
is using the service—for example, a user on a web browser out on the Internet
browsing to your site.) The load balancer can then distribute the requests to servers
that are part of the load-balanced set and perform health probes against those nodes,
ensuring that client requests are never sent to servers that are unavailable.
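The two jobs described above, distributing requests and probing for availability, can be sketched in a few lines. This is a minimal, illustrative model only; the class and method names are not a real Windows or Azure API, and real load balancers probe specific ports or URLs and support richer distribution modes than round-robin:

```python
class LoadBalancer:
    """Conceptual sketch of a load balancer's two core jobs: probing backend
    health and spreading client requests across the healthy nodes.
    (Illustrative names, not an actual Windows or Azure API.)"""

    def __init__(self, backends):
        self.backends = list(backends)   # e.g. ["web1", "web2", "web3"]
        self.healthy = set(backends)     # nodes that passed the last probe
        self._next = 0                   # round-robin cursor

    def probe(self, is_up):
        """Probe pass: is_up(backend) reports whether a node answered."""
        self.healthy = {b for b in self.backends if is_up(b)}

    def pick(self):
        """Return the next healthy backend; nodes that failed their last
        probe are skipped so clients never land on an unavailable server."""
        if not self.healthy:
            raise RuntimeError("no healthy backends: service unavailable")
        while True:
            backend = self.backends[self._next % len(self.backends)]
            self._next += 1
            if backend in self.healthy:
                return backend
```

A probe marking a node down simply removes it from the rotation until a later probe sees it respond again, which is exactly the behavior the text describes.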
Windows Server has long had a feature called Network Load Balancing (NLB). NLB
can be enabled on all of the servers that offer the same service (known as the backend,
as it’s behind the load balancer)—for example, a farm of IIS servers hosting the same
page. The NLB instance would have its own IP address, the virtual IP (VIP), and
each machine in the NLB cluster would respond to that VIP, with traffic distributed
among the nodes. Clients would access services through the NLB VIP (the frontend, as
it’s in front of the load balancer) and be serviced by one of the NLB cluster nodes.
While NLB can scale up to 32 nodes, its software runs on the very nodes offering
the backend services and therefore adds overhead to those workloads. In addition, any
management requires direct communication with the nodes providing the services. In
most large deployments, NLB is not used; instead, dedicated load balancer
appliances are leveraged.
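The NLB model above, in which every cluster node sees traffic sent to the VIP and exactly one node answers, can be sketched conceptually: each node independently runs the same deterministic hash of the client address and accepts the packet only if it is the "owner." (NLB's real filtering algorithm differs in its details; this illustrates the idea only, and the function name is hypothetical.)

```python
import hashlib

def nlb_owner(client_ip: str, nodes: list) -> str:
    """Conceptual sketch of NLB-style distributed filtering: every node in
    the cluster receives the packet addressed to the VIP, each computes the
    same deterministic hash of the client address, and only the resulting
    'owner' accepts the packet while the others silently drop it.
    (Not NLB's actual algorithm; illustrative only.)"""
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Every node computes the same answer, so exactly one accepts each client:
cluster = ["node1", "node2", "node3"]
decisions = {node: nlb_owner("203.0.113.10", cluster) for node in cluster}
```

Because the decision requires no coordination traffic between nodes, the filtering itself scales, but the hashing and filtering work still runs on the same machines serving the workload, which is the overhead the text calls out.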
Azure Resource Manager has its own software load balancer (SLB) that runs as its
own set of resources, can be used in a multitenant configuration, is highly scalable,
and is optimized for maximum throughput. This SLB is provided as a virtual function
in Windows Server 2016 as part of SDNv2 and managed through the Network
Controller. However, if you have done any research, you will frequently hear of a
component called the SLB MUX rather than just SLB. So what is the SLB MUX? Figure
3.34 shows the SLB implementation architecture.