at Network Adapter – Bytes Total/sec, and to see how much bandwidth each virtual machine's virtual network adapter is consuming, look at Hyper-V Virtual Network Adapter – Bytes/sec.
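If you prefer to spot-check these counters from a PowerShell session rather than Performance Monitor, the following is a minimal sketch using Get-Counter. It assumes the English counter names quoted above; the (*) wildcard instances, sample interval, and sample count are arbitrary choices for illustration.

# Sample host-level and per-VM network throughput every 5 seconds for one minute.
# The (*) wildcard returns every adapter instance; narrow it to a specific adapter if needed.
$netCounters = @(
    '\Network Adapter(*)\Bytes Total/sec',
    '\Hyper-V Virtual Network Adapter(*)\Bytes/sec'
)
Get-Counter -Counter $netCounters -SampleInterval 5 -MaxSamples 12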
Finally, for storage we typically care about the latency of reads and writes, which we
can see for each physical disk (or, for SMB, iSCSI, and so on, the equivalent counter) by
looking at PhysicalDisk – Avg. Disk sec/Read and Avg. Disk sec/Write. These values should
generally be less than 50 ms. Knowing the queue length can also be useful. You can
see the number of I/Os waiting to be actioned via the PhysicalDisk – Avg. Disk Read
Queue Length and Avg. Disk Write Queue Length counters; you’ll know that you have
a problem if you see sustained queues.
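As another hedged sketch (same assumptions about English counter names and wildcard instances), you can pull the storage counters from PowerShell and read the CookedValue column. Note that the latency counters report values in seconds, so 0.05 corresponds to the 50 ms guideline above.

# Spot-check read/write latency and queue depth for all physical disks.
$diskCounters = @(
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write',
    '\PhysicalDisk(*)\Avg. Disk Read Queue Length',
    '\PhysicalDisk(*)\Avg. Disk Write Queue Length'
)
(Get-Counter -Counter $diskCounters).CounterSamples |
    Select-Object -Property Path, CookedValue |
    Format-Table -AutoSize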
By looking at all of these performance counters together, it should be possible to
ascertain the cause of any degraded performance on your system. I create a custom
MMC console, add the Performance Monitor snap-in, and then add my counters and
save the customized console so that all my counters are easily available, as shown in
Figure 6.12.
Figure 6.12 A nice view of the key resources for my Hyper-V host using the report
display output type
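If you'd rather watch the same set of counters from a console session than from a saved MMC view, a rough equivalent is a single continuous Get-Counter call. The counter list below is simply the network and storage counters from this section and, as before, is only a sketch under the same naming assumptions.

# One rolling view of the key counters from this section; press Ctrl+C to stop.
$allCounters = @(
    '\Network Adapter(*)\Bytes Total/sec',
    '\Hyper-V Virtual Network Adapter(*)\Bytes/sec',
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write'
)
Get-Counter -Counter $allCounters -SampleInterval 5 -Continuous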
One important point is to benchmark your system. When you first deploy the server,
run the counters to see how the machine runs when “new” and store the results.
Performance Monitor is not just for viewing live data; it can also log the data to a file
so that you can save the state of a monitoring session, which is a great feature. Then,
after the server has been running for a while, you can run the same counters again,
compare them with the original results, and look for any signs of performance degradation.
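One way to capture such a baseline without the GUI is to pipe Get-Counter into Export-Counter from Windows PowerShell. The sketch below is illustrative only: the counter list, interval, and duration are arbitrary, and the output path C:\PerfLogs\HV-Baseline.blg is a made-up example, so adjust them to suit your environment.

# Capture a one-hour baseline (15-second samples, 240 samples) to a .blg file
# that can be opened later in Performance Monitor for comparison.
$baselineCounters = @(
    '\Network Adapter(*)\Bytes Total/sec',
    '\Hyper-V Virtual Network Adapter(*)\Bytes/sec',
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write'
)
Get-Counter -Counter $baselineCounters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path 'C:\PerfLogs\HV-Baseline.blg' -FileFormat BLG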
In Chapter 7, I talk about optimization technologies that use Live Migration to
automatically move virtual machines between nodes in a cluster if the current node
cannot adequately handle the requirements of its virtual machines. This gives you
some breathing room when it comes to estimating the exact resource needs of every virtual