What is Quorum in Windows Server Failover Cluster (WSFC)
In WSFC (Windows Server Failover Cluster), quorum is the voting mechanism that decides whether the cluster is allowed to keep running.
Only the side that holds a majority of votes (from nodes and/or a witness) continues to operate; the side that does not meet the majority will stop. This prevents data corruption caused by split-brain (two partitions both thinking they are the “real” cluster).
Reference: What is a failover cluster quorum witness in Windows Server? | Microsoft Learn
Why do we need quorum?
To guarantee that only one side of a partitioned cluster keeps running during failures or network splits.
To avoid split-brain: the partition that cannot meet the majority shuts down, protecting data consistency.
Availability is determined not by “how many servers” you have, but by whether you hold a majority of votes.
Example 1: 3 votes (3 nodes) → Required = 2. You can lose one node and keep running.
Example 2: 2 nodes + 1 witness (total 3 votes) → Required = 2. If one node fails, the surviving node + witness (2 votes) keep the cluster online.
Example 3: 2 nodes only (total 2 votes) → Required = 2. If one node fails, only 1 vote remains and the cluster stops. A witness is effectively required.
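The vote arithmetic behind these three examples is simple: a partition survives only if its votes are strictly more than half of the total. A minimal sketch in PowerShell (Test-HasMajority is an illustrative helper, not a built-in cmdlet):

```powershell
# A partition keeps quorum only when its votes exceed half of the total votes.
function Test-HasMajority {
    param([int]$PartitionVotes, [int]$TotalVotes)
    return $PartitionVotes -gt ($TotalVotes / 2)
}

Test-HasMajority -PartitionVotes 2 -TotalVotes 3   # Examples 1 and 2: True, cluster stays up
Test-HasMajority -PartitionVotes 1 -TotalVotes 2   # Example 3: False, cluster stops
```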
Terminology
Node vote: Each node has one vote (subject to dynamic adjustments).
Witness: Provides an extra vote (Disk / File Share / Cloud).
Dynamic Quorum / Dynamic Witness: Automatically adjust votes to help maintain majority during failures.
Note: A “majority” means strictly more than half of the total votes.
Quorum Configuration Types
Node Majority
Each node has one vote; clusters with an odd number of nodes are simple and robust.
Example: 3 nodes → Required = 2; one node can fail and the cluster stays online.
Node and Disk Majority
Each node has a vote, and a shared disk witness adds one more.
Typical in even-node + shared storage scenarios (e.g., FCI).
Node and File Share Majority
Each node has a vote, and a File Share Witness (an SMB share) adds one more.
Useful when there is no shared storage or when sites are stretched; ideal to place in a third location.
Cloud Witness
Uses Azure Storage as a witness (Windows Server 2016+).
No physical third site required; lightweight to manage and cost-effective.
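Each of these configurations can be applied with the FailoverClusters module's Set-ClusterQuorum cmdlet. A sketch of the four modes, to be run on a cluster node; the disk resource name, share path, and Azure Storage account details are placeholders, not real values:

```powershell
# Node Majority (no witness)
Set-ClusterQuorum -NoWitness

# Node and Disk Majority (witness is a clustered disk resource)
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# Node and File Share Majority (witness is an SMB share, ideally in a third site)
Set-ClusterQuorum -FileShareWitness "\\witness-server\WitnessShare"

# Cloud Witness (Windows Server 2016+, backed by an Azure Storage account)
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"
```

These commands require the FailoverClusters module and a live cluster, so they are shown as a configuration sketch rather than a runnable script.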
Why a two-node cluster does not always stop when one node reboots
With a witness (FSW/Cloud/Disk): The surviving node + witness = 2/3 votes → the cluster keeps running.
Dynamic Quorum (enabled by default): If nodes go down sequentially, voting can be auto-rebalanced so the remaining node can still maintain the majority (“last man standing” behavior).
However, in a simultaneous network split where the two nodes cannot see each other, each side holds 1/2 and neither has a majority → the cluster stops. Placing the witness in a third site is critical.
Tip: For planned maintenance, pause the node and drain its roles first (Suspend-ClusterNode -Drain; "Pause > Drain Roles" in the GUI), then reboot, and resume the node afterward (Resume-ClusterNode). This minimizes failovers and keeps services stable.
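The maintenance flow above can be sketched with the FailoverClusters cmdlets; the node name "Node1" is a placeholder, and the commands assume an elevated session on a cluster member:

```powershell
# Drain all roles off the node before maintenance
Suspend-ClusterNode -Name "Node1" -Drain

# ...perform patching and reboot Node1...
Restart-Computer -ComputerName "Node1"

# Once the node is back, resume it; -Failback Immediate moves the drained roles back
Resume-ClusterNode -Name "Node1" -Failback Immediate
```

Draining first moves roles gracefully instead of forcing a failover when the node goes down, which is why the surviving side never loses quorum during planned work.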
Dynamic Quorum and NodeWeight
NodeWeight is a flag indicating whether a node has a vote (1) or not (0).
Dynamic Quorum is the mechanism that automatically adjusts NodeWeight to preserve majority when nodes are lost and later rejoin.
When Dynamic Quorum is enabled, the cluster may reassign votes during sequential failures so the remaining side can still meet the majority.
Administrators can manually toggle NodeWeight, but in most environments it’s safest to leave Dynamic features enabled and avoid manual micromanagement.
# Current quorum mode and witness
Get-ClusterQuorum
# Dynamic Quorum / Dynamic Witness state
Get-Cluster | Format-List *Dynamic*
# Node voting state (NodeWeight = 1/0)
Get-ClusterNode | Format-Table NodeName, State, DynamicWeight, NodeWeight
Note: “Dynamic Quorum” = “automatic control of who has a vote.” “NodeWeight” = “per-node vote on/off.” They are closely related.
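If you do need to remove a node's vote manually, for example a DR-site node that should never win quorum on its own, NodeWeight can be set directly on the node object. A sketch (the node name is a placeholder); in normal operation, prefer leaving Dynamic Quorum in control:

```powershell
# Remove the vote from a node (0 = no vote, 1 = vote)
(Get-ClusterNode -Name "DRNode").NodeWeight = 0

# Verify the configured and dynamically assigned weights
Get-ClusterNode | Format-Table NodeName, NodeWeight, DynamicWeight
```

NodeWeight is the administrator-configured setting, while DynamicWeight shows what Dynamic Quorum is actually using at the moment, so comparing the two columns reveals any automatic adjustment.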