Mastering Windows Server 2016 Hyper-V


Performance Tuning and Monitoring with Hyper-V


When an operating system is deployed to a physical machine, it is the only operating
system using that hardware. If there are performance problems, it’s fairly simple to
ascertain the cause by using Task Manager and Performance Monitor. When
virtualization is introduced, looking at performance problems becomes more complex,
because now there is the host operating system (management partition) and all the
various virtual machines. Each virtual machine has its own virtual resources, which
are allocated from the shared resources of the host.
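
One practical consequence: Task Manager in the management partition shows only
that partition’s own CPU usage, not the load generated by the virtual machines. As a
minimal PowerShell sketch (the sampling values are arbitrary), the Hyper-V
performance counters expose the hypervisor’s view instead:

# The hypervisor's view of total physical CPU usage across the host;
# Task Manager in the management partition does not include VM load.
Get-Counter -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' `
    -SampleInterval 2 -MaxSamples 5

# Per-virtual-processor load for each running virtual machine.
Get-Counter -Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'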


It’s important to be proactive and try to avoid performance problems. You can do that
by being diligent in the discovery of your environment, understanding the true
resource needs of the various workloads running in virtual machines, and allocating
reasonable resources. Don’t give every virtual machine 64 virtual processors, don’t set
every virtual machine’s dynamic memory maximum to 1TB, and do consider using
bandwidth management and storage QoS on virtual machine resources. Resource
leaks occur, bad code gets written, and users of virtual machines sometimes perform
“tests.” Any number of problems can cause guest operating systems to consume all of
the available resources, so give virtual machines access only to the resources they
reasonably need based on your discovery and analysis. In most cases, additional
resources can be added fairly painlessly if they are truly required.
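
As a minimal sketch of what reasonable allocations might look like in PowerShell (the
VM name TestVM and the specific limits are illustrative assumptions, not
recommendations):

# Hypothetical caps for a VM named TestVM; base the real numbers on
# discovery of the workload's actual needs.
Set-VMProcessor -VMName 'TestVM' -Count 4

# Dynamic memory with a sensible maximum rather than a 1TB ceiling.
Set-VMMemory -VMName 'TestVM' -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB

# Bandwidth management: cap the virtual NIC (the value is in bits per
# second, so 1GB here is roughly 1Gbps).
Set-VMNetworkAdapter -VMName 'TestVM' -MaximumBandwidth 1GB

# Storage QoS: bound the virtual hard disk's normalized (8KB) IOPS.
Get-VMHardDiskDrive -VMName 'TestVM' | Set-VMHardDiskDrive -MaximumIOPS 1000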


Even with all the planning in the world, performance problems will still occur that
you’ll need to troubleshoot. One of the first questions asked about virtualization
concerns the “penalty”: that is, what performance drop will I see running workloads
virtualized compared to running them directly on bare metal? There is no exact
number. Clearly, the hypervisor and management partition consume some resources,
such as memory, processor, storage, and a small amount of network bandwidth, but
virtualization also adds a performance cost in certain areas, such as extra storage and
network latency. Although very small, this cost does exist; if that small additional
latency is a problem for the highest-performing workloads, there are solutions such as
SR-IOV to remove the network latency and various options for storage, such as
Discrete Device Assignment.
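
For those high-end cases, a brief PowerShell sketch of both options, assuming
hardware that supports them (the switch name, adapter name, VM name, and
location path are placeholders):

# SR-IOV must be enabled when the virtual switch is created; it cannot
# be turned on for an existing switch afterward.
New-VMSwitch -Name 'IOV-Switch' -NetAdapterName 'Ethernet 2' -EnableIov $true

# Give the VM's network adapter an IOV weight (1-100; 0 disables SR-IOV).
Set-VMNetworkAdapter -VMName 'TestVM' -IovWeight 50

# Discrete Device Assignment: dismount the device from the host by its
# PCIe location path (obtainable via Get-PnpDeviceProperty and the
# DEVPKEY_Device_LocationPaths key), then hand it to the VM.
$locationPath = '<PCIe location path of the device>'
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -VMName 'TestVM' -LocationPath $locationPath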


Some people will say that for planning purposes, you should consider the worst-case
scenario, and I’ve seen the number 10 percent used commonly (not for Hyper-V
specifically, but for any virtualization solution). When planning out the available
resources, remove 10 percent of the bare-metal server capability, and the 90 percent
that’s left is what you can expect for virtualized workloads. In reality, I’ve never seen
anything close to this. I commonly see workloads running virtualized on Hyper-V that
are on par with a nonvirtualized workload or even exceed the performance you see on
the physical hardware, which at first glance seems impossible. How can virtualization
improve performance above running directly on bare metal? The reason is that some
workloads can use only a certain amount of resources efficiently. Once you go beyond
that amount, the additional capacity of a large physical server sits idle for a single
workload, whereas virtualization allows multiple workloads to share the same
hardware and drive far higher overall utilization.
