mirror tier.
2. Data is rotated to the parity (capacity) tier as required, freeing up space in the
mirror tier. (Note that this data movement bypasses the cache to avoid polluting
the cache with data that does not belong there.)
This approach provides the capacity benefit of parity without its usual
performance penalty: writes are performed to the high-performance mirror tier
and then moved to the parity tier in the background, without impacting workload
performance. This differs from the tiering in regular Storage Spaces, which is a
scheduled operation that moves hot and cold data. In Windows Server 2016 with
mixed-resiliency disks, this is an inline tiering operation happening in real
time.
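As a sketch of how such a volume might be created, the New-Volume cmdlet can
define both tiers in a single step against a Storage Spaces Direct pool. The
pool name, volume name, and tier sizes below are illustrative assumptions;
Performance and Capacity are the default tier names Storage Spaces Direct
creates for the mirror and parity tiers.

    # Create a multi-resilient volume; names and sizes are assumptions for this example
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MixedResiliency" `
        -FileSystem CSVFS_ReFS `
        -StorageTierFriendlyNames Performance, Capacity `
        -StorageTierSizes 100GB, 900GB

Writes land in the 100GB mirror tier and are rotated to the 900GB parity tier
in the background, as described above.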
REFS AND WINDOWS SERVER 2016
ReFS was introduced in Windows Server 2012 and was engineered as a new
filesystem that used a completely new way of storing data on disk while exposing
familiar APIs for interaction consistent with NTFS. This allowed ReFS to be used
by workloads without rewriting them. Fundamentally, ReFS focused on
maintaining a high level of data availability and reliability, including self-healing
when used in conjunction with technologies like Storage Spaces, where alternate
copies of the data are available and can be used to replace corrupted copies that
are detected automatically. ReFS also enabled online backup and repair of its
critical metadata. It was common to use ReFS for long-term archival data requiring
high levels of resiliency. For the best resiliency possible, ReFS should be used
on Storage Spaces; however, using Storage Spaces does not mean that you must
use ReFS.
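A minimal sketch of this pairing, assuming a storage pool named Pool1 already
exists, creates a mirrored virtual disk and formats it with ReFS so that corrupt
data can be repaired from the alternate mirror copy:

    # Assumes an existing pool named "Pool1"; all names and sizes are illustrative
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ReFSData" `
        -ResiliencySettingName Mirror -Size 500GB |
        Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -NewFileSystemLabel "ReFSData"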
In Windows Server 2012 and Windows Server 2012 R2, ReFS was blocked as a
storage medium for Hyper-V because its integrity stream feature (which enables
the healing ability) caused performance problems with Hyper-V workloads. In
Windows Server 2016, this block has been removed: integrity stream performance
has been improved, and integrity streams are disabled by default anyway.
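You can inspect and toggle integrity streams per file with the Get-FileIntegrity
and Set-FileIntegrity cmdlets; the VHDX path below is just an example:

    # Check whether integrity streams are enabled for a file (path is illustrative)
    Get-FileIntegrity -FileName 'D:\VMs\server1.vhdx'

    # Explicitly enable (or disable with $false) integrity streams on that file
    Set-FileIntegrity -FileName 'D:\VMs\server1.vhdx' -Enable $true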
Why use ReFS beyond resiliency? NTFS was designed a long time ago, when disks
were much smaller. It focused all of its effort on blocks on disk, so operations
that touch large numbers of blocks still take a long time, even on superfast
storage. ReFS shifts this focus, preferring metadata manipulation over block
manipulation where possible. For example, when a new fixed VHD file is created
on NTFS, every block is zeroed on disk, requiring huge amounts of zeroing I/O.
With ReFS, this zeroing I/O is completely eliminated, and there is no loss of
security related to reading existing data on disk: ReFS does not allow reads
from noninitialized clusters, which means that even though the extents backing
the file have been preallocated, they cannot be read from disk unless they have
first been written to.
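You can see the difference by timing a fixed VHDX creation on an ReFS volume
versus an NTFS one; the path and size below are illustrative:

    # Time the creation of a 50GB fixed VHDX (path and size are assumptions);
    # on ReFS this completes in seconds because no zeroing I/O is issued,
    # while on NTFS every block must first be zeroed on disk
    Measure-Command {
        New-VHD -Path 'R:\Test\fixed.vhdx' -SizeBytes 50GB -Fixed
    }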