Mastering Windows Server 2016 Hyper-V

                                     2008 R2    2012                Improvement
Active virtual machines per host     384        1,024               2.5x
Maximum cluster nodes                16         64                  4x
Maximum cluster virtual machines     1,000      8,000               8x
Maximum VHD size                     2TB        64TB (with VHDX)    32x

Some of the new scalability limits may seem ridiculously large: 64TB virtual hard
disks, 1TB of memory in a single VM, and even 64 vCPUs in a single VM. But the point
now is that almost any workload can be virtualized with Windows Server 2012 Hyper-
V. To illustrate this capability to virtualize almost any workload, Microsoft released a
statement that more than 99 percent of the world’s SQL Server deployments could
now run on Windows Server 2012 Hyper-V. One aspect that is important to the 64TB
VHDX scalability is that it removes most scenarios of having to use pass-through
storage, which maps a virtual machine directly to raw storage. The goal of
virtualization is to abstract the virtual machine environment from the physical
hardware. Directly mapping a virtual machine to physical storage breaks this
abstraction and stops some features of Hyper-V from being used, such as checkpoints,
Live Migration, and Hyper-V Replica. In all my years of consulting, I have never seen
an NTFS volume 64TB in size. In fact, the biggest I have heard of is 14TB, but a 64TB
limit means that VHDX scalability would not limit the storage workloads that could be
virtualized.
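The sizing argument above can be sketched in a few lines of Python (the constants and the helper name are illustrative only, not part of any Hyper-V tooling):

```python
# Maximum virtual disk sizes from the scalability table above.
VHD_MAX_BYTES = 2 * 1024**4    # legacy .vhd format: 2TB
VHDX_MAX_BYTES = 64 * 1024**4  # .vhdx format in Windows Server 2012: 64TB

def fits_in_vhdx(requested_bytes: int) -> bool:
    """True if a workload's disk fits in a VHDX file, meaning
    pass-through (raw) storage is not needed for capacity reasons."""
    return requested_bytes <= VHDX_MAX_BYTES

# A 14TB volume -- the largest the author reports having heard of --
# exceeds the old VHD limit but fits comfortably within VHDX.
fourteen_tb = 14 * 1024**4
print(fourteen_tb <= VHD_MAX_BYTES)  # False
print(fits_in_vhdx(fourteen_tb))     # True
```

In other words, any real-world NTFS volume the author has encountered fits inside a single VHDX, so capacity alone no longer forces the pass-through approach that breaks checkpoints, Live Migration, and Hyper-V Replica.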


WHY MOST VOLUMES ARE LESS THAN 2TB
In most environments, it’s fairly uncommon to see NTFS volumes greater than
2TB. One reason is that master boot record (MBR) partitioning had a limit of 2TB.
The newer GUID Partition Table (GPT) removed this limitation, but volumes still
stayed at around 2TB. Another reason concerns the unit of recoverability: a data
set is typically kept no larger than the amount that can be restored within the
required time frame. Legacy tape-based backup/restore solutions could therefore
cap how large data sets grew, but modern backup/restore solutions that are
primarily disk-based remove this type of limit.
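The 2TB MBR ceiling mentioned above is simple arithmetic: an MBR partition entry records a partition's size as a 32-bit count of sectors, and classic disks use 512-byte sectors. A quick check in Python:

```python
# MBR partition entries store a partition's size as a 32-bit count of
# 512-byte sectors, which is exactly where the 2TB ceiling comes from.
SECTOR_BYTES = 512
MAX_SECTORS = 2**32  # 32-bit sector-count field in an MBR partition entry

mbr_limit = MAX_SECTORS * SECTOR_BYTES
print(mbr_limit)                 # 2199023255552 bytes
print(mbr_limit == 2 * 1024**4)  # True: exactly 2TB
```

GPT replaces the 32-bit sector count with 64-bit values, which is why it removes the limitation.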
The number one reason for limiting volume size is corruption of the NTFS
volume. If corruption occurs, the ChkDsk process must be run, which
takes the volume offline while the entire disk is scanned and problems are
repaired. Depending on the disk subsystem and its size, this process could take
hours or even days. The larger the volume, the longer ChkDsk will take to run and
the longer the volume would be offline. Companies would limit the size of
volumes to minimize the potential time a volume would be offline if ChkDsk had
to be run. In Windows Server 2012, ChkDsk has been rearchitected to no longer
take the volume offline during the search for errors. Instead, it has to take the