if it’s attached to every node in the cluster. This is achieved through ClusPort and
ClusBflt, which run on each node in the cluster. ClusBflt acts as a target that provides
access to its local storage via virtual disks and virtual enclosures, while ClusPort acts
as an initiator (a virtual HBA) that connects to the ClusBflt instance running on each
node, thereby enabling access to the storage. Through this target-initiator
relationship, every node can access every disk, using SMB 3 as the storage fabric with
a new block-mode transfer feature. This is interesting, because SMB is normally a
file-based protocol, even though the B in SMB stands for Block.
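You can see this in practice by enumerating the physical disks from any single node;
because ClusPort surfaces the remote disks as if they were locally attached, the disks
of every node in the cluster appear in the output. The following is a minimal sketch,
assuming you run it on a node of an existing Storage Spaces Direct cluster:

# Run on any cluster node; disks from every node in the cluster are listed,
# because ClusPort exposes the remote disks as if they were local.
Get-PhysicalDisk |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, HealthStatus, Size -AutoSize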
Additionally, a fairness algorithm is utilized to ensure that local application I/O has
priority over system requests and that all nodes receive fair access to disks. What this
means in the real world is that VM requests to the storage take precedence over any
system activities related to disk maintenance, such as replication. Note that when
recovery is required (for example, when a disk has failed), the SSB will allocate up to
20 percent of resources to enable forward progress on restoring data resiliency, even
if the VMs are attempting to use all of the available storage resources. If the VMs are
not consuming all resources, the SSB will use as many resources as possible to restore
normal levels of resiliency and performance. There is also an I/O “blender” that
smooths I/O by de-randomizing random I/Os into a sequential I/O pattern when
using HDDs, which normally incur a seek penalty.
All of this happens underneath the existing Storage Spaces SpacePort functionality,
which enables the creation of pools on which the virtual disks are created, then the
filesystem, Cluster Shared Volumes, and so on. The preferred filesystem is CSVFS
(Cluster Shared Volume File System) on top of ReFS. It is also possible to use CSVFS
on top of NTFS; although some functionality is lost, doing so enables the use of
deduplication, which is not possible on ReFS.
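To relate these layers to a live system, you can walk the stack with PowerShell. This is
a minimal sketch, assuming an existing Storage Spaces Direct cluster (the exact output
columns vary between versions):

# The pool built across the locally attached disks of all nodes.
Get-StoragePool -IsPrimordial $false |
    Format-Table FriendlyName, HealthStatus, Size -AutoSize

# The virtual disks created on the pool; resiliency is defined at this layer.
Get-VirtualDisk |
    Format-Table FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy -AutoSize

# The Cluster Shared Volumes carved out on top.
Get-ClusterSharedVolume | Format-Table Name, State -AutoSize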
The full picture can be seen in Figure 4.8, which highlights the parts that are regular
Storage Spaces and the parts that are specific to Storage Spaces Direct; they all work
together to deliver the complete Storage Spaces Direct feature. Note that the virtual
disk sits high in the stack, and it is where resiliency is configured. Then, based on the
resiliency requirements, the data is stored multiple times across disks in different
nodes via ClusPort.
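Because resiliency lives at the virtual disk layer, you specify it when the volume is
created. The following sketch creates a three-way mirrored volume (able to survive
two failed disks or nodes) formatted as CSVFS on ReFS; the pool and volume names
are illustrative, not required values:

# Create a 1 TB three-way mirror volume as CSVFS on ReFS.
# "S2D on Cluster1" and "Volume1" are example names only.
New-Volume -StoragePoolFriendlyName "S2D on Cluster1" `
    -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS `
    -Size 1TB `
    -ResiliencySettingName Mirror `
    -PhysicalDiskRedundancy 2

Specifying -FileSystem CSVFS_NTFS instead would give you the NTFS-based option
described earlier.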