decide whether the witness should have a vote or not:
If an even number of nodes have a vote (dynamic weight = 1), the witness dynamic
vote = 1.
If an odd number of nodes have a vote (dynamic weight = 1), the witness dynamic
vote = 0.
This is logical because the witness is needed only when there is an even number of
nodes, which ordinarily would not be able to make quorum in the event of a split. If
the witness goes offline or fails, its dynamic witness vote is set to 0, in the same
way that a failed node's vote is removed. To check whether the witness currently
has a vote, run the following PowerShell command:
(Get-Cluster).WitnessDynamicWeight
A return value of 1 means that the witness has a vote; a return value of 0 means that
the witness does not have a vote. If you look at the nodes in the cluster, the witness
vote weight should correlate to the dynamic votes of the cluster nodes. To check the
dynamic votes of the cluster nodes from PowerShell, use the following:
PS C:\> Get-ClusterNode | ft Name, DynamicWeight -AutoSize
Name       DynamicWeight
----       -------------
savdalhv20             1
savdalhv21             1
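To verify the correlation yourself, you can compare the expected witness weight
against the actual value. The following sketch is my own illustration rather than
anything from the product documentation; it assumes a cluster with a witness
configured and the FailoverClusters PowerShell module available. It counts the
nodes that currently hold a dynamic vote and applies the even/odd rule described
earlier:
# Count the nodes that currently hold a dynamic vote
$votingNodes = @(Get-ClusterNode | Where-Object { $_.DynamicWeight -eq 1 }).Count
# Even/odd rule: the witness should vote only when the voting-node count is even
$expectedWitnessVote = if ($votingNodes % 2 -eq 0) { 1 } else { 0 }
$actualWitnessVote = (Get-Cluster).WitnessDynamicWeight
"Voting nodes: $votingNodes; expected witness weight: $expectedWitnessVote; actual: $actualWitnessVote"
In a healthy cluster the expected and actual values should match; a mismatch
would suggest the cluster is in the middle of recalculating votes or that the
witness has failed.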
Advanced Quorum Options and Forcing Quorums
In all of the quorum explanations so far, the critical factor is that a majority of votes
(greater than 50 percent) must be available for the cluster to keep running. At times,
the cluster will contain an even number of votes, either because of failures (although
dynamic witness should help avoid ever having an even number of votes unless it's
the witness that has failed) or because of misconfiguration. Windows Server 2012 R2
and above provide tie-breaker code so that the cluster can survive a simultaneous loss
of 50 percent of the votes while ensuring that only one partition keeps running and
the other partition shuts down. For example, if a four-node cluster with no witness
vote splits into two partitions of two nodes each, neither side would normally hold a
majority. When 50 percent of the votes are lost in this way, clustering automatically
selects one of the partitions to "win" by using a specific algorithm.
The way the winning partition is selected is as follows: If an even number of node
votes is present in the cluster, the clustering service randomly selects a node and
removes its vote. That changes the number of votes in the cluster back to odd, giving
one of the sites a majority vote and therefore making it capable of surviving a break in
communication. If you want to control which site should win if a break in
communication occurs, you can set the cluster attribute LowerQuorumPriorityNodeId
to the ID of the node that should lose its vote when you have an even number of
nodes and no witness available. Remember, provided you have configured a witness,
this functionality should not be required.
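If you do need it, the following sketch shows the general approach. The ID value 2
used here is purely illustrative and not from any real cluster; use an ID reported by
your own cluster:
# List the node IDs that the cluster has assigned to each node
Get-ClusterNode | ft Name, Id -AutoSize
# Illustrative value only: the node with ID 2 will lose its vote in a tie
(Get-Cluster).LowerQuorumPriorityNodeID = 2
Reading (Get-Cluster).LowerQuorumPriorityNodeID without the assignment returns
the current setting.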