as Volume
C:\ClusterStorage\Volume2 for the next, and so on. The contents of each disk are
visible within that disk's Volume folder. As a best practice, place each virtual
machine in its own folder, as shown in Figure 7.27.
Figure 7.27 Viewing cluster shared volumes in Explorer
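As a minimal sketch of working with this layout, the following PowerShell (assuming the FailoverClusters module that ships with the Failover Clustering feature; the folder and VM names are hypothetical) lists each CSV with its path under C:\ClusterStorage and creates a dedicated folder for one virtual machine:

# List the cluster shared volumes and their friendly paths under C:\ClusterStorage
Get-ClusterSharedVolume |
    Select-Object Name, @{n='Path';e={$_.SharedVolumeInfo.FriendlyVolumeName}}

# Create a per-VM folder on the first CSV (the VM name is hypothetical)
New-Item -ItemType Directory -Path 'C:\ClusterStorage\Volume1\VM-SQL01'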
The ClusterStorage structure is shared, providing a single, consistent namespace
to all nodes in the cluster so that every node has the same view. Once a disk is added
to CSV, it is accessible to all nodes at the same time, and all nodes can read and
write concurrently to storage that is part of ClusterStorage. Remember that when using
Storage Spaces Direct, any disks created are automatically added as CSVs.
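For clusters without Storage Spaces Direct, a disk that is already available cluster storage can be added to CSV from PowerShell; a brief sketch (the disk resource name is hypothetical):

# Add an available cluster disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name 'Cluster Disk 2'

# Confirm the new volume now appears under C:\ClusterStorage on every node
Get-ClusterSharedVolume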
The problem with NTFS being used concurrently by multiple operating system
instances relates to metadata changes: if multiple operating systems change metadata
at the same time, corruption can result. CSV fixes this by assigning one node to act
as the coordinator node for each specific CSV. This is the node that has the disk
online locally and has complete access to the disk as a locally mounted device. The
other nodes do not have the disk mounted; instead, they receive a raw sector map of
the files of interest to them on each LUN that is part of CSV. That map enables the
noncoordinator nodes to perform read and write operations directly to the disk
without actually mounting the NTFS volume, which is known as Direct I/O.
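The coordinator node for each CSV, and whether each node is currently performing Direct I/O or redirected I/O, can be inspected with PowerShell. A sketch follows (Get-ClusterSharedVolumeState requires Windows Server 2012 R2 or later; the volume and node names are hypothetical):

# Show which node coordinates (owns) each CSV
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Show the I/O mode each node is using for each CSV (for example, Direct)
Get-ClusterSharedVolumeState | Select-Object Name, Node, StateInfo

# Move coordination of a CSV to another node
Move-ClusterSharedVolume -Name 'Cluster Disk 1' -Node 'Node2'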
In Windows Server 2008 R2, the mechanism that allowed this Direct I/O was the CSV
filter (CsvFlt), which was injected into the filesystem stack on every node in the
cluster. The filter received the sector map from the coordinator node of each CSV
disk and used that information to capture operations targeting the ClusterStorage
namespace, performing the Direct I/O as required. In Windows Server 2012, this
changed to the CSVFS mini filesystem. The CSV technology allows the noncoordinator
nodes to