gateway, a GRE tunnel is established to an endpoint somewhere else in your
datacenter, and a specific tenant’s traffic is sent across. This is similar to L3
forwarding, in that traffic is forwarded out of the virtual network; except with GRE
tunneling, separate VLANs are not required, and the GRE tunnel is established
over the shared network infrastructure. Each tenant uses its own GRE tunnel
instance, maintaining the tenant’s isolated address space across that
infrastructure. For example, you might have a red network GRE tunnel and a blue
network GRE tunnel. Consider a hosting provider that has a tenant with dedicated
hardware in a certain address space; that same tenant now wants resources in a
multitenant cloud that share the same address space and remain connected to the
dedicated hardware. If you use L3 forwarding, you’ll need separate VLANs for the
customers and separate routing infrastructure. With a GRE tunnel, the isolated
address space is maintained between the endpoints, with no dedicated VLAN
required. This same type of tunnel can also be used to connect to MPLS circuits.
The gateway can also connect a tenant to another location using a site-to-site VPN
tunnel, which works with the SLB to front-end the gateway instances’ VIP. Because the
gateway integrates with SLB, the back end can scale easily without requiring any
changes to the other end of the site-to-site VPN connection.
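The sketch below shows why: the remote site only ever dials the VIP, so adding
gateway instances behind it changes nothing on the far end. The addresses and class
are illustrative, not the SLB’s actual API.

    import itertools

    # Illustrative model of a VIP fronting a pool of gateway instances (DIPs).
    class GatewayVip:
        def __init__(self, vip: str, gateway_dips: list[str]):
            self.vip = vip                  # the address the remote site dials
            self.dips = list(gateway_dips)  # gateway instance addresses
            self._rr = itertools.cycle(self.dips)

        def add_gateway(self, dip: str) -> None:
            """Scale out: the DIP pool grows; the remote config is untouched."""
            self.dips.append(dip)
            self._rr = itertools.cycle(self.dips)

        def route_new_connection(self) -> str:
            return next(self._rr)

    s2s = GatewayVip("203.0.113.10", ["10.10.0.4", "10.10.0.5"])
    s2s.add_gateway("10.10.0.6")   # new capacity, same VIP at the other end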
A big change from the solution in Windows Server 2012 R2 to the multitenant
gateway in Windows Server 2016 is in the scaling and high availability of the gateway.
In Windows Server 2012 R2, you could have one active gateway and one passive
gateway: only one instance performed work while the other sat idle. If you
reached the capacity of the active node, you had to add a new pair of gateways that
would have their own VIP, and a tenant could use only one gateway at a time.
In Windows Server 2016, an M:N redundancy is used. M is the number of active
gateways, highlighting the fact that there can be more than one. N is the number of
passive gateways available in case an active gateway fails. For example, you can have
four active and two passive gateways, or any other combination. Like the SLB MUX, a
gateway is a VM, and multiple gateways are added to a gateway pool to provide the
multitenant gateway services. A gateway pool can contain gateways of all types or
only gateways of certain types; it is really a mechanism for controlling resource
utilization and capacity. A hoster may have different pools with different
levels of bandwidth, so it can charge tenants for different levels of connectivity by
moving them between gateway pools. The Network Controller is responsible for
checking the availability of gateway instances. If an active gateway becomes
unavailable, the Network Controller can move any connections from the downed node
to a passive node. There is no Failover Clustering running within the VMs; they are
stand-alone instances, which makes the deployment simpler than that of their 2012
R2 equivalents.
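As a rough illustration of the M:N model, the following Python sketch simulates the
four-active, two-passive example above; the class and method names are hypothetical,
not the Network Controller’s API.

    # Hypothetical model of M:N gateway redundancy: M actives carry
    # connections, N passives stand by, and a failed active's connections
    # are moved to a promoted passive (the role the Network Controller plays).
    class GatewayPool:
        def __init__(self, active: list[str], passive: list[str]):
            self.active = set(active)       # M gateways doing work
            self.passive = list(passive)    # N idle standbys
            self.connections = {gw: [] for gw in active}

        def place_connection(self, conn: str) -> str:
            # Tenants can land on any active gateway, not one fixed pair.
            gw = min(self.active, key=lambda g: len(self.connections[g]))
            self.connections[gw].append(conn)
            return gw

        def handle_failure(self, failed_gw: str) -> str:
            """Promote a passive gateway and move the failed node's load."""
            replacement = self.passive.pop(0)
            self.active.remove(failed_gw)
            self.active.add(replacement)
            self.connections[replacement] = self.connections.pop(failed_gw)
            return replacement

    pool = GatewayPool(active=["gw1", "gw2", "gw3", "gw4"],
                       passive=["gw5", "gw6"])
    pool.place_connection("tenant-red-s2s")
    pool.handle_failure("gw1")   # gw5 takes over; no guest clustering needed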
If you deployed the Windows Server 2012 R2 gateway, you will remember that the
gateway VMs had to be on a separate set of Hyper-V hosts from those hosting actual
VMs that participated in a virtual network. This limitation is removed in the SDNv2