Understanding Guest Machine Performance under Hyper-V

In this post I begin to consider guest machine performance under Hyper-V. This is the seventh post in a series on Hyper-V performance. The series began here.

All virtualization technology requires executing additional layers of systems software that add overhead in many functional areas of Windows, including

  • processor scheduling,
  • intercepting and emulating certain guest machine instructions that would violate the integrity of the virtualization scheme,
  • machine memory management,
  • initiating and completing IO operations, and
  • synthetic device interrupt handling.

The effect of virtualization in each of these areas of execution is to impose a performance penalty, and this applies equally to VMware, Xen, and the other flavors of virtualization that are available. Windows guest machine enlightenments under Hyper-V reduce some of the performance penalties associated with virtualization, but they cannot eliminate them entirely. Your application suffers some performance penalty when it executes on a virtual machine. The question is how big that penalty is.

Executing these additional layers of software under virtualization always affects the performance of a Windows application negatively, particularly its responsiveness. Individually, each of these functional areas adds only a very small amount of overhead each time it is exercised. Added together, however, these overhead factors are significant enough to take notice of. But the real question is whether they are substantial enough to actively discourage data centers from adopting virtualization technology, given its benefits in many operational areas. Earlier in this series, I suggested a preliminary answer, which is “No, in many cases the operational benefits of virtualization substantially outweigh the performance risks.” Still, many workloads remain better off running on native hardware. Whenever maximum responsiveness and/or throughput is required, native Windows machines reliably outperform Windows guest machines executing the same workload.

Where Hyper-V virtualization technology excels is in partitioning and distributing hardware resources across virtual machines that each require far less capacity than is available on powerful server machines. Furthermore, by exploiting the ability to clone new guest machines rapidly, virtualization technology is often used to enhance the scalability and performance of an application that requires a cluster of Windows machines. Virtualization can make scaling up and scaling out such an application operationally easier. However, you should be aware that there are other ways to cluster machines that achieve the same scaling up and scaling out improvements without incurring the overhead of virtualization.

Performance risks

The configuration flexibility that virtualization provides is accompanied by a set of risk factors that expose virtual machines to potential performance problems far more serious than the overhead considerations discussed immediately above. These performance risks need to be understood by IT professionals charged with managing the data center infrastructure. The most serious risk you will encounter is the ever-present danger of overloading the Hyper-V Host machine, which leads to worse performance degradation than any of the virtualization “overheads” enumerated above. Shared processors, shared memory and shared devices introduce opportunities for contention for those physical resources among guest machines that would not otherwise be sharing those components if allowed to run on native hardware. The added complexity of administering the virtualization infrastructure, with its more ubiquitous level of resource sharing, is a related risk factor.
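To make the over-commitment risk a little more concrete, here is a minimal sketch in Python that computes a simple CPU over-commitment ratio for a Host, comparing the virtual processors assigned to its resident guest machines against the Host’s logical processors. The guest inventory and the counts below are invented for illustration; they are not produced by any Hyper-V tool.

```python
# Rough sketch: estimate CPU over-commitment on a Hyper-V Host.
# The guest inventory below is invented for illustration; in practice
# you would pull vCPU counts from your own configuration records.

host_logical_processors = 16          # physical cores x hardware threads

guest_vcpus = {
    "SQL01": 8,
    "WEB01": 4,
    "WEB02": 4,
    "APP01": 8,
}

total_vcpus = sum(guest_vcpus.values())
ratio = total_vcpus / host_logical_processors

print(f"Assigned vCPUs: {total_vcpus}, logical processors: {host_logical_processors}")
print(f"Over-commitment ratio: {ratio:.2f}")

# A ratio well above 1.0 does not guarantee a problem -- guests rarely
# run all their vCPUs busy at once -- but the higher it climbs, the
# greater the chance that guests contend for the same physical processors.
```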

When a Hyper-V Host machine is overloaded, or over-committed, all its resident guest machines are apt to suffer, but isolating them so they share fewer resources, particularly disk drives and network adaptors, certainly helps. However, shared CPUs and shared memory are inherent in virtualization, so achieving the same degree of isolation with regard to those resources is more difficult, to say the least. This aspect of resource sharing is the reason Hyper-V has virtual processor scheduling and dynamic memory management priority settings, and we will need to understand when to use these settings and how effective they are. In general, priority schemes are only useful when a resource is over-committed, essentially an out-of-capacity situation. This creates a backlog of work – a work queue – that is not getting done. Priority sorts the work queue, allowing more of the higher priority work to get done, at the expense of lower priority workloads. Like any other out-of-capacity situation, the ultimate remedy is not priority, but finding a way to relieve the capacity constraint. With a properly provisioned virtualization infrastructure, there should be a way to move guest machines from an over-committed VM Host to one that has spare capacity.
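The point that priority only rearranges an existing backlog can be illustrated with a toy simulation. The Python sketch below, using made-up work items, drains a fixed-capacity “processor” from a priority-sorted queue: higher-priority work finishes sooner, but the total amount of queued work is unchanged, which is exactly why priority settings are not a substitute for relieving the capacity constraint.

```python
import heapq

# Toy model: a host can complete 'capacity' units of work per interval.
# Priority reorders the backlog; it does not increase capacity.

def run_interval(queue, capacity):
    """Complete up to 'capacity' units, highest priority (lowest number) first."""
    done = []
    while queue and capacity > 0:
        priority, name, work = heapq.heappop(queue)
        completed = min(work, capacity)
        capacity -= completed
        work -= completed
        if work > 0:
            heapq.heappush(queue, (priority, name, work))  # still backlogged
        else:
            done.append(name)
    return done

# Invented workload: (priority, guest, units of work demanded this interval)
backlog = [(1, "SQL01", 6), (2, "WEB01", 4), (3, "BATCH01", 8)]
heapq.heapify(backlog)

finished = run_interval(backlog, capacity=8)
print("Finished this interval:", finished)                 # high-priority work first
print("Still queued:", [(n, w) for _, n, w in backlog])    # lower-priority work waits
```

Running the sketch shows the high-priority guest completing while the lower-priority work remains queued; only adding capacity (or moving guests to another Host) shrinks the backlog itself.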

Somewhere between over-provisioned and under-provisioned is the range where the Hyper-V Host is efficiently provisioned to support the guest machine workloads it is configured to run. Finding that balance can be difficult, given constant change in the requirements of the various guest machines.

Finally, there are also performance risks associated with guest machine under-provisioning, where the VM Host machine has ample capacity, but one or more child partitions is prevented by its virtual machine settings from accessing as much of the Hyper-V Host machine’s processor and memory resources as it requires.

Table 2 summarizes the four kinds of Hyper-V configurations that need to be understood from a cost/performance perspective, focusing on the major performance penalties that can occur.

Table 2. Performance consequences of over- or under-provisioning the VM Host and its resident guest machines.

Condition                           Who suffers a performance penalty
Over-committed VM Host              All resident guest machines suffer
Efficiently provisioned VM Host     No resident guest machines suffer
Over-provisioned VM Host            No guest machines suffer, but hardware cost is higher than necessary
Under-provisioned Guest             Guest machine suffers
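As a rough illustration of how the four conditions in Table 2 might be told apart programmatically, the following Python sketch classifies a Host and one of its guests from a few summary measurements. The threshold values and inputs are assumptions chosen for illustration, not figures defined by Hyper-V.

```python
# Sketch: map simple measurements onto the four conditions of Table 2.
# Thresholds and inputs are illustrative assumptions, not Hyper-V metrics.

def classify(host_utilization, guest_capped, guest_demand_exceeds_cap):
    """Return the Table 2 condition that best matches the measurements."""
    if host_utilization > 0.85:
        return "Over-committed VM Host: all resident guest machines suffer"
    if guest_capped and guest_demand_exceeds_cap:
        return "Under-provisioned Guest: that guest machine suffers"
    if host_utilization < 0.30:
        return "Over-provisioned VM Host: no guests suffer, but hardware cost is higher than necessary"
    return "Efficiently provisioned VM Host: no resident guest machines suffer"

print(classify(host_utilization=0.20, guest_capped=False, guest_demand_exceeds_cap=False))
print(classify(host_utilization=0.55, guest_capped=True, guest_demand_exceeds_cap=True))
```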

In the next blog entry, I will make an effort to characterize the performance profile of each configuration condition, beginning with the case that generates the least damaging performance penalty, namely the over-provisioned VM Host. Characterizing application performance when the Hyper-V Host machine is over-provisioned will provide insight into the minimum performance penalties that you can expect to accrue under virtualization.
