Hyper-V Performance — Introduction

Recently, I put Microsoft’s Hyper-V virtualization technology through its paces, subjecting it to a series of CPU and memory stress tests. These tests are similar to the ones I used to evaluate VMware ESX, and they help me understand the impact of virtualization on Windows guest machine performance. This blog entry kicks off a series that will drill into Hyper-V performance.

Introduction.

Virtualization technology allows a single machine to host multiple copies of an operating system and run these guest virtual machines concurrently. Virtual machines (VMs) reside and run within the physical machine, known as the VM Host, and share the processors, memory, disks, network interfaces, and other devices that are installed on the host machine. Virtualization installs a hypervisor to manage the Host machine and distribute its resources among guest virtual machines. A key element of virtualization is exposing “virtualized” interfaces to the physical machine’s processors, memory, disk drives, etc., which the guest machines use as they would ordinarily use physical resources. The guest machine executes against virtual processors and acquires guest physical memory that is mapped by the hypervisor to actual machine memory. The hypervisor mediates and controls access to the physical resources of the machine, providing mechanisms for sharing disk devices and network adaptors, while all the time maintaining the illusion that allows each guest machine to access its virtual devices and otherwise function normally. This sharing of the Host computer’s resources by its guest operating systems has profound implications for the performance of the applications running on those guest virtual machines.

Virtualizing the data center infrastructure brings many potential benefits, including enhancing operational resilience and flexibility. Among the many performance issues that arise with virtualization, figuring out how many guest machines you can pack into a virtualization host without overloading it is probably the most complex and difficult one. Dynamic memory management is an important topic to explore because RAM cannot be shared in quite the same fashion as other computer resources. Since the “overhead” of virtualization impacts application responsiveness in a variety of often subtle ways, this series will also describe strategies for reducing these performance impacts, while still using virtualization technology effectively to reduce costs.

Microsoft’s approach to virtualization is known as Hyper-V. This series describes how Hyper-V works and its major performance options. Using a series of case studies, we will look at the ways Hyper-V impacts the performance of Windows machines and their applications. The discussion will focus on the Hyper-V virtual machine scheduling and dynamic memory management options that are critical to increasing the guest machine density on a virtualization host. While the discussion here refers to specific details of Microsoft’s virtualization product, many of the performance considerations raised apply equally to other virtual machine implementations, including VMware and Xen. Elsewhere, I have blogged about the performance of Windows guest machines running under VMware. (Click here for more on that topic.)

Installation and startup.

To install Hyper-V, you must first install a Windows OS that supports Hyper-V. Typically, this is Windows Server Core, since installing the GUI is not necessary for administering the machine. (The examples discussed here all involve installation of Windows Server 2012 R2 with the GUI, which includes a Management Console snap-in for defining and managing the guest machines.) After installing the base OS, you then configure the machine as a virtualization Host, adding Hyper-V support by installing that Server role. Following installation, the Hyper-V hypervisor gains control of the underlying hardware immediately after the next reboot.

In Hyper-V, each guest OS is executed inside a partition that the hypervisor allocates and manages. During the boot process, as soon as it is finished initializing, the hypervisor starts the base OS that was originally used to install Hyper-V. The base OS is executed inside a special partition created for it called the Root partition (or the parent partition, depending on which pieces of the documentation you are reading). In Hyper-V, the hypervisor and Hyper-V components installed inside the Root partition work together to deliver the virtualization services that guest machines need.

Once the OS inside the root partition is up and running, if any guest machines are configured to start automatically, the Hyper-V management component inside the root partition signals the hypervisor to allocate memory for additional child partitions. After allocating the start-up memory assigned to each child partition, the hypervisor transfers control to it, beginning by booting the OS installed inside the child partition.

The hypervisor.

The Hyper-V hypervisor is a thin layer of systems software that runs at a special privilege level of the hardware, higher than any other code, including code running inside a guest machine. Hyper-V is a hybrid architecture in which the hypervisor functions alongside the OS in the parent partition, which is responsible for delivering many of the virtual machine-level administration and housekeeping functions, for example, the ones associated with power management. The parent partition is also instrumental in accessing physical devices like the disks and the network interfaces, using the driver stack installed there. (We will look at the IO architecture used in Hyper-V in more detail in a moment.)

Hyper-V establishes a Hypercall interface that allows the hypervisor and the partitions it manages to communicate directly. Only the root partition has the authority to issue Hypercalls associated with partition administration, but interfaces exist that provide communication with child partitions as well. To use the Hypercall interface, a guest must first be able to recognize that it is running as a guest machine. A status bit indicating that a hypervisor is present is available in a register value returned when a CPUID instruction is issued. CPUID instructions issued by a guest machine are intercepted by Hyper-V, which then returns a set of virtualized values consistent with the hardware image being presented to the guest OS, an image that does not necessarily map to the physical machine. When Hyper-V returns the hardware status requested by the CPUID instruction, it is effectively disclosing its identity to the guest.
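
To make the detection mechanism concrete, here is a minimal C sketch, using the GCC/Clang <cpuid.h> intrinsics, that tests the hypervisor-present bit returned by CPUID leaf 1 and then reads the hypervisor vendor signature leaf (0x40000000), which reports “Microsoft Hv” when Hyper-V is the hypervisor. It is an illustration of the CPUID convention, not production detection code:

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>   /* GCC/Clang CPUID intrinsics */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: bit 31 of ECX is the "hypervisor present" flag.
           On bare metal it is 0; a hypervisor such as Hyper-V sets it. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 1 not supported\n");
            return 1;
        }
        if (!(ecx & (1u << 31))) {
            printf("No hypervisor detected\n");
            return 0;
        }

        /* Leaf 0x40000000 returns the hypervisor vendor signature in
           EBX:ECX:EDX ("Microsoft Hv" when running under Hyper-V). */
        char vendor[13] = {0};
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &ecx, 4);
        memcpy(vendor + 8, &edx, 4);
        printf("Hypervisor present, vendor signature: %s\n", vendor);
        return 0;
    }

Run inside a Hyper-V child partition, the program reports a hypervisor and its signature; run on bare metal, the present bit is clear and the vendor leaf is never consulted.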

Then the guest must understand the Hypercall interface and when to use it. You can install any guest OS that supports the Intel architecture and run it inside a Hyper-V child partition, but only a guest running some version of Windows understands the Hypercall interface. A guest OS that is shutting down can make a Hypercall to let the hypervisor know that its status is changing, for example. Microsoft calls a scenario where Windows recognizes that it is running as a Hyper-V guest and then calls the hypervisor at the appropriate time using the Hypercall interface a guest OS enlightenment.

There are a number of places where Microsoft has added Hyper-V enlightenments to Windows, very few of which are publicly documented. Some documentation is available on the Hypercall interface, but, generally, the topic is not considered something that 3rd party developers of device drivers, on behalf of whom much of the Windows OS kernel documentation exists, need to be concerned with. The standard disk and network interface device drivers installed in guest machines are enlightened versions developed by Microsoft for use with Hyper-V guests, while the root partition runs native Windows device driver software, installed in the customary fashion. Devices installed on the Hyper-V host machine that provide a pass-through capability, allowing the guest OS to issue the equivalent of native IO, are highly desirable for performance reasons, but they don’t need to make Hypercalls either. They can interface directly with the guest OS.

Virtualization hardware.

The Hyper-V approach to virtualization is an example of software partitioning, which contrasts with partitioning mechanisms that are sometimes built into very high end hardware. (IBM mainframes, for example, use a hardware partitioning facility known as PR/SM, but also support installing an OS with software partitioning capabilities in one or more of the PR/SM partitions.) Make no mistake, though: the performance of guest machines running under Hyper-V is greatly enhanced by hardware facilities that Intel and AMD have added to the high-end chips destined for server machines to support virtualization. In fact, for performance reasons, Hyper-V cannot be installed on Windows Server machines without Intel VT or AMD-V hardware support. (You can, however, install Client Hyper-V on machines running Windows 8.x or Windows 10. One important use case for Client Hyper-V is that it makes it easier for developers to test or demo software under different versions of Windows.)

The hardware extensions from AMD and Intel to support virtualization include special partition management instructions that can only be executed by a hypervisor running at the highest privilege level (below Ring 0 on the Intel architecture, the privilege level where OS code inside the guest partitions runs). Restricting these virtual machine management instructions to software running at the hypervisor’s privilege level is necessary to guarantee the security and integrity of the virtualized environments. For example, the hypervisor instructions include facilities to dispatch a virtual machine so that its threads can execute, a function performed by the hypervisor Scheduler in Hyper-V.

The hardware’s virtualization extensions also include support for managing two levels of the Page Tables used in mapping virtual memory addresses, one set maintained by the hypervisor and another set maintained by the guest OS. This virtual memory facility, known generically as Second Level Address Translation (SLAT), is crucial for performance reasons. The SLAT implementations differ slightly between the hardware manufacturers, but functionally they are quite similar. Prior to support for SLAT, a hypervisor was forced to maintain a set of shadow page tables for each guest, with the result that virtual memory management was much more expensive.
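
To picture the two-level translation, the toy C sketch below collapses each multi-level page table walk into a single array lookup: one table stands in for the guest’s page tables (guest virtual to guest physical) and one for the hypervisor’s SLAT tables (guest physical to machine physical). It is purely a conceptual model of the two-stage walk, not how the hardware or Hyper-V actually represents these structures:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12                 /* 4 KB pages */
    #define NPAGES     16                 /* tiny toy address spaces */

    /* Stage 1: guest page table, maintained by the guest OS
       (guest virtual page number -> guest physical page number). */
    static const uint64_t guest_pt[NPAGES] = { 3, 7, 2, 11, 5, 0, 9, 1,
                                               4, 6, 8, 10, 12, 13, 14, 15 };

    /* Stage 2: SLAT table, maintained by the hypervisor
       (guest physical page number -> system physical page number). */
    static const uint64_t slat_pt[NPAGES] = { 40, 41, 42, 43, 44, 45, 46, 47,
                                              48, 49, 50, 51, 52, 53, 54, 55 };

    /* With SLAT, the hardware composes the two walks on every TLB miss.
       Without SLAT, the hypervisor must pre-compose them into a shadow
       page table (guest virtual -> system physical) and rebuild it every
       time the guest edits its own page tables -- the expensive part. */
    static uint64_t translate(uint64_t guest_va)
    {
        uint64_t gvpn   = guest_va >> PAGE_SHIFT;        /* guest virtual page  */
        uint64_t offset = guest_va & ((1 << PAGE_SHIFT) - 1);
        uint64_t gppn   = guest_pt[gvpn % NPAGES];       /* stage 1: guest walk */
        uint64_t sppn   = slat_pt[gppn % NPAGES];        /* stage 2: SLAT walk  */
        return (sppn << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        uint64_t va = 0x2ABC;                            /* guest virtual addr  */
        printf("guest VA 0x%llx -> machine PA 0x%llx\n",
               (unsigned long long)va, (unsigned long long)translate(va));
        return 0;
    }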

Another set of virtualization hardware enhancements was required in order to support direct attachment of IO devices to a virtual machine. Attached peripherals use DMA (Direct Memory Access), which means they reference machine memory addresses where IO buffers are located, allocated in Windows from the nonpaged pool. Guest machines do not have access to machine memory addresses, and the hardware APIC they interface with to manage device interrupts is also virtualized. Direct IO requires hardware support for remapping Guest Physical Memory DMA addresses and remapping Interrupt vectors, among other related extensions. Without direct-attached devices, the performance of I/O bound workloads suffers greatly under all virtualization schemes.

Guest OS Enlightenments.

In principle, each guest virtual machine can operate ignorant of the fact that it is not communicating directly with physical devices, which instead are being emulated by the virtualization software. In practice, however, there are a number of areas where the performance of a Windows guest can be enhanced significantly when the guest OS is aware that it is running virtualized and executes a different set of code than it would otherwise. (Paravirtualization is the term used in the computer science literature to describe the situation where the guest OS is modified to run virtualized. Hyper-V is definitely an example of paravirtualization, as is VMware ESX.) Consider I/O operations directed to the synthetic disk and network devices the guest machine sees. Without an enlightenment, an I/O interrupt is destined to be processed twice, once in the Root partition and again inside the guest OS driver stack. The synthetic disk and network device drivers Microsoft supplies to run inside a Windows guest machine understand that the device is synthetic and that the interrupt has already been processed in the root partition, and they can exploit that fact.

Microsoft calls these Hyper-V-aware performance optimizations running in the guest OS enlightenments, and they are built into guest machines that run a Microsoft OS. For example, the device driver software for the synthetic devices that the guest machine sees is built to run exclusively under Hyper-V. These device drivers contain an enlightenment that bypasses some of the code that is executed on the Windows guest following receipt of a TCP/IP packet, because there is no need for the TCP/IP integrity checking to be repeated inside the guest when the packet has already passed muster inside the Root partition. Since TCP/IP consumes a significant amount of CPU time when processing each packet that is received, this enlightenment plays an important role in ensuring that network I/O performance is not significantly degraded under Hyper-V. (Note that VMware does something very similar, so Windows I/O can be considered “enlightened” under VMware, too.)

Another example of a Hyper-V enlightenment is a modification to the OS Scheduler that makes a Hypercall on a context switch to avoid a performance penalty that the guest machine would incur otherwise. This type of enlightenment requires both that the guest OS understand that it is running inside a Hyper-V partition and that a particular Hypercall service exists that should be invoked in that specific context. The effect of this type of enlightenment is, of course, to make a Windows guest VM perform better under Hyper-V. These guest OS enlightenments suggest that the same Windows guest will not perform as well under a competitor’s flavor of virtualization, where Windows is denied this insider insight.

Another enlightenment limits the number of timer interrupts, normally generated 256 times each second, that a Windows guest produces for use by the OS Scheduler. Since timers are virtualized under Hyper-V, timer interrupts are more expensive generally, but the main reason for the timer enlightenment is to stop an idle guest from issuing timer interrupts periodically simply to maintain its CPU accounting counters, a legacy feature of the Windows OS that impacts power management and wastes CPU cycles. Without these periodic interrupts, however, guest Windows machines no longer have a reliable CPU accounting mechanism under Hyper-V. The hypervisor maintains its own set of logical processor accounting counters, which need to be used instead.
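
As an illustration, the hypervisor’s logical processor counters can be sampled from the root partition with the standard Windows Pdh API. The short C sketch below reads the “\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time” counter; it is only a minimal sketch, and it assumes the Hyper-V role is installed and that the English counter path is resolvable on the system:

    /* Compile with: cl pdhsample.c pdh.lib   (run in the root partition) */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY   query;
        PDH_HCOUNTER counter;
        PDH_FMT_COUNTERVALUE value;

        /* Hypervisor-level CPU accounting, visible in the root partition. */
        const char *path =
            "\\Hyper-V Hypervisor Logical Processor(_Total)\\% Total Run Time";

        if (PdhOpenQueryA(NULL, 0, &query) != ERROR_SUCCESS) return 1;
        if (PdhAddEnglishCounterA(query, path, 0, &counter) != ERROR_SUCCESS) {
            fprintf(stderr, "Counter not found -- is the Hyper-V role installed?\n");
            return 1;
        }

        /* Rate counters need two samples; sleep between collections. */
        PdhCollectQueryData(query);
        Sleep(1000);
        PdhCollectQueryData(query);

        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value)
                == ERROR_SUCCESS)
            printf("%% Total Run Time (all logical processors): %.1f\n",
                   value.doubleValue);

        PdhCloseQuery(query);
        return 0;
    }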

Another one of the known guest machine enlightenments notifies the hypervisor when a Windows kernel thread or device driver is executing spinlock code on a multiprocessor, which means it is waiting for a thread executing on another virtual processor to release a lock. Spinlocks are used in the kernel to serialize access to critical sections of code, with the caveat that they should only be used when there is a reasonable expectation that the locked resource will be accessed quickly and then freed up immediately. What can happen under Hyper-V is that the spinlock code is executing on one of the machine’s processors, while the thread holding the lock makes no progress because the virtual processor it is running on has been preempted by the hypervisor scheduler. Upon receiving a long spinlock notification, the Hyper-V Scheduler can then make sure the guest machine has access to another logical processor where the guest machine code that is holding the resource (hopefully) can execute.

This enlightenment allows the Hyper-V Scheduler to dispatch a multiprocessor guest machine without immediately backing each of its virtual processors with an actual hardware logical processor. Note, however, that this enlightenment does not help application-level locking avoid similar problems, since native C++ and .NET Framework programs are discouraged from using spinlocks.
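
To make the scenario concrete, here is an illustrative user-mode spinlock in C (not the Windows kernel’s actual implementation): every iteration of the acquire loop consumes real processor cycles, which is harmless when the lock holder releases the lock within a few microseconds, but pure waste if the virtual processor running the lock holder has been preempted by the hypervisor. The long-spin notification described above is what gives Hyper-V a chance to break that stalemate.

    #include <stdatomic.h>
    #include <immintrin.h>   /* _mm_pause() */

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    /* Acquire: spin until the flag is clear. Under Hyper-V, if the thread
       that owns the lock is running on a virtual processor the hypervisor
       has preempted, this loop burns real CPU time to no effect -- the
       situation the "long spinlock" enlightenment reports via Hypercall. */
    static void spin_lock(void)
    {
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire)) {
            _mm_pause();     /* hint to the core that we are spinning */
        }
    }

    static void spin_unlock(void)
    {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }

    /* Typical usage: hold the lock only for a short critical section. */
    static long shared_counter;

    void increment_shared(void)
    {
        spin_lock();
        shared_counter++;    /* brief critical section, then release */
        spin_unlock();
    }

    int main(void)
    {
        increment_shared();
        return 0;
    }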

Performance expectations.

Before drilling deeper into the Hyper-V architecture in the next post and discussing the performance impacts of virtualization in some detail, it will help to put the performance implications of virtualization technology in perspective. Let’s begin with some very basic performance expectations.

Of the many factors that persuade IT organizations to configure Windows to run as a virtual machine guest under either VMware or Microsoft’s Hyper-V, very few pertain to performance. To be sure, many of the activities virtualization system administrators perform are associated with the scalability of an application, specifically, provisioning a clustered application tier so that a sufficient number of guest machine images are active concurrently. Of course, the Hyper-V Host machine and the shared storage and networking infrastructure that is used must be adequately provisioned to support the planned number of guests. Provisioning the Hyper-V Host machine also extends to ensuring that the guest machines configured to run on the host machine don’t overload it. Finally, virtualization administrators must ensure that guest machines are configured properly to utilize the physical resources that are available on the host machine. Under Hyper-V, this entails making sure an adequate number of virtual processors are available to the guest machine, as well as an appropriate machine memory footprint. This aspect of provisioning guest machines properly is, perhaps, more aptly described as capacity planning. It is certainly critical to the performance of the applications executing inside those guest machines that they are configured properly and the Hyper-V host machine where they reside is adequately provisioned.

Far from being under-provisioned, the server hardware that IT organizations dedicate to virtualization is often massively over-provisioned with respect to individual guest machine workloads. Virtualization allows IT to consolidate multiple servers and workstations on a single piece of high-end hardware, with the object of making significantly more efficient and more effective use of that hardware. Data center server hardware that is provisioned with high-end network adaptors and connections to high-speed SAN disks (either flash memory or conventional drives) is expensive on a grand scale, so the cost benefit of packing guest machines more tightly into a physical footprint is considerable. Meanwhile, there is always the temptation to overload the VM Host, which runs the risk that the virtualization host machine could become under-provisioned. Under-provisioning has serious implications for the virtualization environment. Whenever the Hyper-V Host machine is under-provisioned for the workload presented by the guest machines it is hosting, any performance problems that result are only too likely to impact all of the guest machines that are resident on that host machine.

The best way to think about the performance of applications running under virtualization is that you have traded off some, hopefully minimal, amount of performance for the benefit of utilizing data center hardware more efficiently. Compared to a native machine running Windows, you can count on the fact that virtualization will always have some impact on the performance of your applications running inside a Windows virtual machine (VM) guest. The impact may in fact be minor, in some cases barely detectable, when the VM Host machine is over-provisioned, that is, when the physical resources that exist on the Host machine comfortably exceed what is granted to its guest VMs. On the other hand, when the guest machine does not have access to sufficient resources, the performance impact of virtualization can be severe.

In this series of blog posts, I look at specific examples of this trade-off by comparing the performance of a benchmarking application

  1. on a native machine,
  2. on an amply provisioned Hyper-V Host, and
  3. on a Hyper-V Host machine that is significantly under-provisioned, relative to the guest machine workloads it is configured to support.

The key skill that performance analysts need to cultivate is the ability to recognize an under-provisioned Hyper-V Host, or, better yet, to proactively anticipate the problem and rebalance and redistribute the workload to avoid future performance problems.

Given the capacity of today’s data center hardware, under-provisioning is often not a big concern. But when performance problems due to lack of capacity on the physical machine do occur, diagnosing and fixing them is challenging work. Typically, diagnosing this type of performance problem requires visibility into both the Hyper-V performance counters and performance measurements from the guest machines. When there is contention for storage or network resources, you may also need access to performance data from those subsystems, many of which are shared across multiple Hyper-V or VMware Hosts.

Figuring out why a guest machine does not have access to adequate resources in a virtualized environment can also be quite difficult. Some of this difficulty is due to sheer complexity: it is not unusual for the Host to be executing a dozen or more guest machines, any one of which might be contributing to overloading a shared resource, e.g., a disk array attached to the Host machine or one of its network interface cards. The guest VM might also be facing a constraint where the configuration of the virtual machine restricts its access to the physical processor and memory resources it requires. Another source of difficulty is that the virtualization environment distorts the measurements that are produced from inside the guest machine that, under normal circumstances, would be used to understand the hardware requirements of the workload.

Fortunately, the configuration flexibility that is one of the main benefits of virtualization technology also often provides the means to deal rapidly with many performance problems that arise when guest machines are configuration-constrained or otherwise severely under-provisioned. With virtualization, you have the ability to spin up a new guest machine quickly and then add it non-disruptively to an existing cluster of front-end web servers, for example. Or you might be able to use live migration to relieve a capacity constraint in the configuration by moving one or more workloads to a different Hyper-V Host machine.
