or pass-through disks will not benefit from storage resiliency.
There is another feature that was mentioned previously among the new node states:
Quarantine. It is not desirable to have a node constantly falling in and out of cluster
membership, behavior that is typically caused by an underlying configuration or
hardware issue. This “flapping” of membership can cause resources to move
constantly between nodes, resulting in service interruption. Windows Server 2016
therefore introduces Quarantine logic to stop this from happening. Quarantine is
triggered for a node if the node ungracefully leaves the cluster three times within an
hour. By default, that node will stay in the Quarantine state for 2 hours, during which
time it is not allowed to rejoin the cluster or to host cluster resources. Once the 2
hours have passed, the node can rejoin; the assumption is that during that 2-hour
window, action will have been taken to resolve the underlying cause of the flapping.
Clustering will not allow more than 25 percent of nodes to be in a quarantine state at
any one time. When a node is quarantined, any VMs are gracefully drained to other
nodes without interrupting the actual workloads. It is possible to customize the
quarantine behavior through settings that are configured at the cluster level, that is,
with (Get-Cluster); an example of viewing and changing them follows the list:
QuarantineThreshold The number of failures within the hour before
quarantine (3 by default).
QuarantineDuration The number of seconds a node stays in quarantine (7,200
by default, which is 2 hours). Note that if you set the value to 0xFFFFFFFF, the
node will never automatically leave quarantine and will stay in that state until it
is manually brought online.
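For example, to view the current values and change them (the new values shown here
are purely illustrative):

# View the current quarantine settings
(Get-Cluster).QuarantineThreshold
(Get-Cluster).QuarantineDuration

# Require five failures within the hour and quarantine for only one hour
(Get-Cluster).QuarantineThreshold = 5
(Get-Cluster).QuarantineDuration = 3600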
To force a node that is currently quarantined to rejoin the cluster, use the
Start-ClusterNode cmdlet with the -ClearQuarantine flag; for example:
Start-ClusterNode -Name Node3 -ClearQuarantine
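One way to confirm which nodes are currently quarantined before clearing them is to
check the node states; because Quarantine is one of the node states mentioned earlier,
it surfaces in the State column:

Get-ClusterNode | Format-Table Name, State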
CLUSTER VIRTUAL NETWORK ADAPTER
When talking about the cluster network, it’s worth looking at how it actually
functions behind the scenes. The Failover Cluster Virtual Adapter is implemented by
the NetFT.sys driver, which is why the cluster virtual adapter is commonly referred
to as NetFT. The role of NetFT is to build fault-tolerant
TCP connections across all available interfaces between nodes in the cluster, almost
like a mini NIC Teaming implementation. This enables seamless transitions between
physical adapters in the event of a network adapter or network failure.
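One way to see which networks NetFT has available for these connections is to list
the cluster networks; the Role property indicates whether a network carries cluster
traffic, client traffic, or both:

Get-ClusterNetwork | Format-Table Name, Role, Address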
The NetFT virtual adapter is an actual virtual device, although a hidden one. In
Device Manager, the adapter can be seen if you enable viewing of hidden devices.
You can also see it in the output of the ipconfig /all command, as shown here:
Tunnel adapter Local Area Connection* 11:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Microsoft Failover Cluster Virtual Adapter
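You can also find the hidden adapter from PowerShell instead of ipconfig, using the
-IncludeHidden switch of Get-NetAdapter:

Get-NetAdapter -IncludeHidden |
    Where-Object InterfaceDescription -Like "*Failover Cluster*"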