Hyper-V architecture: Memory Ballooning

This is the fifth post in a series on Hyper-V performance. The series begins here.

Ballooning

Removing memory from a guest machine while it is running is a bit more complicated than adding memory to it, which makes use of a hardware interface that the Windows OS supports. One factor that makes removing memory from a guest machine difficult is that the Hyper-V hypervisor does not gather the kind of memory usage data that would enable it to select guest machine pages that are good candidates for removal. The hypervisor’s virtual memory capabilities are limited to maintaining the second-level page tables needed to translate Guest Physical addresses to valid machine memory addresses. Because the hypervisor does not maintain any memory usage information that could be used, for example, to identify which of a guest machine’s physical memory pages have been accessed recently, it relies on ballooning when Guest Physical memory needs to be removed from a partition. Ballooning transfers the decision about which pages to remove from memory to the guest machine OS, which can execute its normal page replacement policy.

Ballooning was pioneered in VMware ESX and first discussed publicly in a paper by Carl Waldspurger entitled “Memory Resource Management in VMware ESX Server,” published in December 2002 in the Proceedings of the Fifth Symposium on Operating Systems Design and Implementation (OSDI ’02). The Hyper-V implementation is similar, but with some key differences. One key difference is that the Hyper-V hypervisor never removes guest physical memory arbitrarily and swaps it to a disk file, as VMware ESX does when it faces an acute shortage of machine memory. VMware ESX swapping selects pages at random for removal, and without any knowledge of how guest machine pages are used, the hypervisor can easily choose badly. The Microsoft Hyper-V developers chose not to implement any form of hypervisor swapping of machine memory to disk. For page replacement when there is a shortage of machine memory, Hyper-V relies solely on the virtual memory management capabilities of the guest OS, which is usually Windows. Frankly, performance suffers under either approach when there is an extreme machine memory shortage – overloading machine memory is something to be avoided on both virtualization platforms. Hyper-V does have the virtue that machine memory management is simpler, relying on a single mechanism to relieve a machine memory shortage.

In both virtualization approaches, it is important to be able to understand the signs that the VM Host machine’s memory is over-committed. In Hyper-V, these include:

  • a shortage of Hyper-V Dynamic Memory\Available Memory
  • sustained periods where the Hyper-V Dynamic Memory\Average Memory Pressure measurements for one or more guest machines hover near 100
  • internal guest machine measurements that show high paging rates to disk (Memory\Pages/sec, Memory\Pages Input/sec)
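
If you want to watch for these warning signs programmatically rather than in Perfmon, the counters can be polled with the Windows PDH API. The sketch below is illustrative only: the counter paths and the TEST5 instance name are assumptions based on the counter names used in this series, so verify the exact object, counter, and instance names with Perfmon on your own Hyper-V host.

    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY query = NULL;
        PDH_HCOUNTER availCtr = NULL, pressureCtr = NULL;
        PDH_FMT_COUNTERVALUE value;

        PdhOpenQuery(NULL, 0, &query);

        // Counter paths below are assumptions -- confirm them in Perfmon first.
        PdhAddEnglishCounterW(query,
            L"\\Hyper-V Dynamic Memory Balancer(System Balancer)\\Available Memory",
            0, &availCtr);
        PdhAddEnglishCounterW(query,
            L"\\Hyper-V Dynamic Memory VM(TEST5)\\Average Pressure",
            0, &pressureCtr);

        // Two collections a second apart so the formatted values are well defined.
        PdhCollectQueryData(query);
        Sleep(1000);
        PdhCollectQueryData(query);

        if (PdhGetFormattedCounterValue(availCtr, PDH_FMT_LARGE, NULL, &value) == ERROR_SUCCESS)
            printf("Available Memory: %lld\n", value.largeValue);
        if (PdhGetFormattedCounterValue(pressureCtr, PDH_FMT_LONG, NULL, &value) == ERROR_SUCCESS)
            printf("Average Memory Pressure: %ld\n", value.longValue);

        PdhCloseQuery(query);
        return 0;
    }

An Average Pressure value that hovers near 100, or an Available Memory value trending toward zero, is the programmatic equivalent of the warning signs listed above.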

Because ballooning transfers the decision about which pages to remove from guest physical memory to the guest OS, we need to revisit virtual memory concepts briefly in this new context. One goal of virtual memory management is to utilize physical memory efficiently, essentially filling up physical memory completely, aside from a small buffer of unallocated physical pages that are kept in reserve. Memory over-commitment works because processes frequently allocate more virtual memory than they need at any one moment in time. Consequently, it is usually not necessary to back every allocated virtual memory page with guest physical memory. Consider a guest machine that reports a Memory Pressure reading of 100 – in other words, its Committed Bytes = Visible Physical Memory. Typically, 10-20% of the machine’s committed pages are likely to be relatively inactive, which would allow the OS to remove them from physical memory without much performance impact.

Since virtual memory management by design tends to fill up physical memory, from time to time the OS needs to displace a currently allocated virtual page from physical memory to make room for a new or non-resident page that a process has just referenced. Windows implements an LRU page replacement policy, trimming older pages from process working sets when physical memory is in short supply. Windows and Linux guest machines manage virtual memory dynamically, keeping track of which of an application’s virtual pages are currently being accessed. Furthermore, the OS’s page replacement policy ages allocated virtual memory pages that have not been referenced in the current interval. The pages of a process that have not been referenced recently are usually better candidates for removal than pages in current use.

The ballooning technique used in Hyper-V – and in VMware ESX, as well – pushes the decision about which specific pages to remove down to the guest machine, which is in a far better position to select candidate pages for removal because the guest OS does maintain memory usage data. The term “ballooning” refers to a management thread running inside the guest machine that acquires empty physical memory buffers when the hypervisor signals that it wants to remove physical memory from the partition. This action can be thought of as the memory balloon inflating. Later, when Hyper-V decides to add memory back to the child partition, it deflates the balloon, freeing the balloon memory that was previously acquired.

In Hyper-V, ballooning is initiated by the Dynamic Memory Balancer, a task hosted inside the Root partition’s Virtual Machine Management Service (VMMS) component. Whenever the Dynamic Memory Balancer decides to adjust the amount of guest physical memory allotted to a guest machine, it communicates with the specific VM worker process running in the Root partition that maintains the state of that guest machine. If the decision is to remove memory, the VM worker process issues a page removal request that is communicated to the child partition across the VMBus.

The memory ballooning process used to reduce the size of guest physical memory is depicted in Figure 15. Inside the child partition, the Dynamic Memory VSC – also responsible for implementing the guest OS enlightenment that reports the number of guest OS committed bytes – responds to the remove memory request by making a call to the MmAllocatePagesForMdlEx API, which acquires memory from the non-paged pool. This pool of allocated physical memory, normally used by drivers for DMA devices that need access to physical addresses, is the “balloon” that inflates when Hyper-V determines it is appropriate to remove guest physical memory from the guest machine. The Dynamic Memory VSC then returns to the Root partition – via another VMBus message – a list of the Guest Physical addresses of the balloon pages that it has just acquired. The Root partition then signals the hypervisor that these pages are available to be added to a different partition.
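
To make the mechanics concrete, here is a minimal kernel-mode sketch of what a balloon inflation built around MmAllocatePagesForMdlEx might look like. This is not the actual Dynamic Memory VSC code, just an illustration of the documented allocation and release calls named above; the InflateBalloon and DeflateBalloon function names are hypothetical.

    #include <ntddk.h>

    static PMDL g_BalloonMdl = NULL;   // describes the currently inflated balloon pages

    // Hypothetical balloon inflation: acquire up to 'bytes' of physical memory that
    // the guest OS will not touch. The PFN list inside the MDL is the information
    // the Dynamic Memory VSC would report back to the Root partition over the VMBus.
    NTSTATUS InflateBalloon(SIZE_T bytes)
    {
        PHYSICAL_ADDRESS low, high, skip;

        low.QuadPart  = 0;
        high.QuadPart = MAXLONGLONG;   // no practical upper bound on page addresses
        skip.QuadPart = 0;

        // The pages returned here are pinned (non-pageable) until explicitly freed.
        // The MDL may describe fewer bytes than requested if memory is scarce.
        g_BalloonMdl = MmAllocatePagesForMdlEx(low, high, skip, bytes,
                                               MmCached, MM_DONT_ZERO_ALLOCATION);
        if (g_BalloonMdl == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        // MmGetMdlPfnArray(g_BalloonMdl) yields the guest physical page frame
        // numbers that would be packaged into the VMBus "remove memory" reply.
        return STATUS_SUCCESS;
    }

    // Hypothetical balloon deflation: give the pinned pages back to the guest OS.
    VOID DeflateBalloon(VOID)
    {
        if (g_BalloonMdl != NULL) {
            MmFreePagesFromMdl(g_BalloonMdl);
            ExFreePool(g_BalloonMdl);
            g_BalloonMdl = NULL;
        }
    }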

HyperV memory balloon processing

Figure 15. The balloon driver is a Dynamic Memory VSC that responds to a VMBus request to remove memory by acquiring memory from the non-paged pool. The balloon driver then returns a list of physical memory pages that the hypervisor can immediately grant to a different virtual machine.

Since the balloon driver in the guest machine will pin the memory balloon pages in nonpaged physical memory until further notice, the physical memory pages in the guest machine balloon prove the exception to the rule that memory locations can only be occupied by one guest machine at a time. The pages in the balloon are set aside, remaining accessible from inside the guest machine; however, the balloon driver ensures that they are not actually accessed. This allows Hyper-V to grant the machine memory these balloon pages occupy to another guest machine to use.

From inside the guest Windows machine, the balloon inflating increases the amount of nonpaged pool memory that is allocated, as illustrated in Figure 16, which reports the size of the nonpaged pool in a Windows guest during a period when the balloon inflates (shortly after 5 pm) and then deflates about an hour later.

Hyper-V guest machine ballooning

Figure 16. Inside the guest Windows machine, the balloon inflating corresponds to an increase in the amount of nonpaged pool memory that is allocated. In this example, the balloon deflates about 1 hour later.

As in VMware, ballooning itself has no guaranteed immediate impact on physical memory contention inside the Windows guest machine. So long as the guest machine has a sufficient supply of available pages, the impact remains minimal. Over time, however, ballooning can pin enough guest OS pages in physical memory to force the guest machine to execute its page replacement policy. In the case of Windows, this means that the OS will also issue a LowMemoryResourceNotification event, which triggers garbage collection in a .NET Framework application and a similar buffer manager trimming operation in SQL Server. On the other hand, if ballooning does not cause the guest machine to experience memory contention, i.e., if the balloon request can be satisfied without triggering the guest machine’s page replacement policy, there will be no visible impact inside the guest machine other than an increase in the size of the nonpaged pool.
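
The low-memory signal mentioned above is an ordinary, documented Win32 mechanism, so any application can react to balloon-induced memory pressure the same way SQL Server and the CLR do. A minimal user-mode sketch:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // The handle becomes signaled whenever Windows detects low physical memory,
        // whether the cause is ordinary workload growth or an inflating balloon.
        HANDLE hLowMem = CreateMemoryResourceNotification(LowMemoryResourceNotification);
        if (hLowMem == NULL) {
            printf("CreateMemoryResourceNotification failed: %lu\n", GetLastError());
            return 1;
        }

        for (;;) {
            // Block until the low-memory condition is signaled. A non-blocking
            // alternative is QueryMemoryResourceNotification.
            if (WaitForSingleObject(hLowMem, INFINITE) == WAIT_OBJECT_0) {
                // A real application would trim caches or trigger a GC here.
                printf("Low memory notification received -- release inactive buffers.\n");
                Sleep(5000);   // avoid spinning while the condition persists
            }
        }
        // Not reached in this sketch; a real program would CloseHandle(hLowMem).
    }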

Hyper-V architecture: Dynamic Memory

This is the fourth post in a series on Hyper-V performance. The series begins here.

The hypervisor also contains a Memory Manager component for managing access to the machine’s physical memory, i.e., RAM. For the sake of clarity, when discussing memory management in the Hyper-V environment, I will refer to RAM, the Hyper-V host machine’s actual physical memory, as machine memory, to distinguish it from the view of virtualized physical memory granted to each partition. Guest machines never access machine memory directly. Each guest machine is presented with a range of Guest Physical memory addresses (GPA), based on its configuration definitions, which the hypervisor maps to machine memory using a set of page tables that it maintains.

Machine memory cannot be shared in the same way that other computer resources like CPUs and disks can be shared. Once memory is in use, it remains 100% occupied until the owner of those memory locations frees it. The hypervisor’s Memory Manager is responsible for distributing machine memory among the root and child partitions. It can partition memory statically, or it can manage the allocation of memory to partitions dynamically. In this section, we will focus on the dynamic memory management capabilities of Hyper-V, an extremely valuable option from the standpoint of capacity planning and provisioning. Dynamic Memory, as the feature is known, enables Hyper-V to host considerably more guest machines, so long as these guest machines are not actively using all the Guest Physical Memory they are eligible to acquire.

The unit of memory management is the page, a fixed-size block of contiguous memory addresses. Windows supports standard 4K pages on Intel hardware and also uses some Large 2 MB pages in specific areas where it is appropriate. Hyper-V supports allocation using both page sizes. Pages of machine memory are either (1) allocated and in use by a child partition or (2) free and available for allocation on demand as needed.

Each guest machine assumes that the physical memory it is assigned is machine memory, and builds its own unique set of Guest Virtual Address (GVA) to Guest Physical address mappings – its own set of page tables. Both sets of page tables are referenced by the hardware during virtual address translation when a guest machine is running. As mentioned above, this hardware capability is known as Second Level Address Translation (SLAT). SLAT hardware makes virtualization much more efficient. Figure 8 illustrates the capability of SLAT hardware to reference both the hypervisor Page Tables that map Guest Physical addresses to machine memory and the guest machine’s Page Tables that map Guest Virtual Addresses to Guest Physical addresses during virtual address translation.

Hyper-V virtual memory management

Figure 8. Second Level Address Translation (SLAT) hardware and the tagged TLB are hardware optimizations that improve the performance of virtual machines.

Figure 8 also illustrates another key hardware feature, the tagged TLB, which was specifically added to the Intel architecture to improve the performance of virtual machines. The Translation Lookaside Buffer (TLB) is a small, dedicated cache internal to the processor core containing recently translated virtual addresses and the corresponding machine memory addresses they are mapped to. In the processor hardware, virtual addresses are translated to machine memory addresses during instruction execution, and TLBs are extremely effective at speeding up that process. With virtualization hardware, each entry in the processor’s TLB is tagged with a virtual machine guest ID, as illustrated, so when the hypervisor Scheduler dispatches a new virtual machine, the TLB entries associated with the previously executing virtual machine can be identified and purged from the table.

Memory management for the Root partition is handled a little differently from the child partitions. The Root partition requires access to machine memory addresses and to other physical hardware on the motherboard, like the APIC, so that the Windows OS running in the Root partition can manage physical devices like the keyboard, mouse, video display, storage peripherals, and the network adapter. But the Root partition is also a Windows machine that is capable of running Windows applications, so it builds page tables for mapping virtual addresses to physical memory addresses like a native version of the OS. Unlike the child partitions, however, the physical addresses in the Root partition’s page tables correspond directly to machine memory addresses. This allows the Root OS to access memory mapped for use by the video card and video driver, for example, as well as the physical memory accessed by other DMA device drivers. In addition, the hypervisor reserves some machine memory locations exclusively for its own use, which is the only machine memory that is off limits to the Root partition.

From a capacity planning perspective, it is important to remember that the Root partition requires some amount of Guest Physical Memory, too. You can see how much physical memory the Root is currently using by looking at the usual OS Memory performance counters.

Dynamic Memory.

The Hyper-V hypervisor can adjust machine memory grants to guest machines up or down dynamically, a feature that is called dynamic memory. Dynamic memory refers to adjustments in the size of the Guest Physical address space that the hypervisor grants to a guest machine running inside a child partition. When dynamic memory is configured for a guest machine, Hyper-V can give a partition more physical memory to use or remove guest physical memory from a guest machine that doesn’t require it, ignoring for a moment the relative memory priority of the guests. With the dynamic memory feature of Hyper-V, you can pack significantly more virtual machines into the memory footprint of the VM host machine, although you still must be careful not to pack in too many guest machines and create a memory bottleneck that can impact all the guest machines that are resident on the Hyper-V Host.

When dynamic memory is enabled for a child partition, you set minimum and maximum Guest Physical memory values and allow Hyper-V to make adjustments based on actual physical memory usage. Figure 9 charts the amount of physical memory that is visible to one of the child partitions in a test scenario that I will be discussing. Starting around 5:45 pm, Hyper-V boosted the amount of Guest Physical Memory visible to this guest machine from approximately 4 GB to 8 GB, which was its maximum dynamic memory allotment. Figure 9 also shows two additional Hyper-V dynamic memory metrics, the cumulative number of Add and Remove memory operations that Hyper-V performed. Judging from the shape of the Add Memory Operations line graph, the measurement counts operations, not pages, so it is not a particularly useful measurement. In fact, once Dynamic Memory is enabled for one or more guest machines, memory adjustment operations occur on a more or less continuous basis. Knowing the rate at which Memory Add and Remove operations occur provides no insight into the magnitude of those adjustments, which is instead reflected in changes to the amount of Guest Physical Memory that is visible to the partition.

Hyper-V guest machine visible physical memory

Figure 9. Guest machines configured to use Dynamic Memory are subject to adjustments in the amount of Guest Physical Memory that is visible for them to access. Hyper-V adjusted the amount of physical memory visible to the TEST5 guest machine upwards from 4 GB to 8 GB beginning around 5:45 PM. 8 GB was the maximum amount of Dynamic Memory that was configured for the partition. This upward adjustment occurred after two of the other guest machines being hosted were shut down, freeing up the memory they had allocated.

 

The hypervisor makes decisions to add or remove physical memory from a guest machine based on measurements of how much virtual and physical memory the guest OS is actually using. A guest OS enlightenment reports the number of committed bytes, in effect, the number of virtual and physical memory pages that the Windows guest has constructed Page Table entries (PTEs) to address. Each guest machine’s committed bytes are then compared to the amount of physical memory currently visible to it, yielding a metric Hyper-V calls Memory Pressure, calculated using the formula:

Memory Pressure = (Guest machine Committed Memory / Visible Guest Physical Memory) * 100

Memory Pressure is a ratio of the virtual and physical memory allocated by the guest machine divided by the amount of physical memory currently allotted to the guest to address. For example, any guest machine with a Memory Pressure value less than 100 has allocated fewer pages of virtual and physical memory than its current physical memory allotment. Guest machines with a Memory Pressure measure greater than 100 have allocated more virtual memory than their current physical memory allotment and are at risk for higher demand paging rates, assuming all the allocated virtual memory is active.

Figure 10 reports the Current value of Memory Pressure for the guest machine shown in Figure 9 over an 8-hour period that includes the measurement interval used in the earlier chart. Just prior to Hyper-V’s Add Memory operation that increased the amount of Guest Physical Memory visible to the partition from 4 GB to 8 GB, the Memory Pressure was steady at 150. Working backwards from the measurements reported in Figure 9 that showed 4 GB of Guest Physical Memory visible to the guest and the corresponding Memory Pressure values shown in Figure 10, you can calculate the number of guest machine Committed Bytes:

   Committed Bytes = (Memory Pressure / 100) * Guest Physical Memory

So, a Memory Pressure reading of 150 up until 5:45 pm (Figure 10), combined with the 4 GB of Guest Physical Memory visible to the guest (Figure 9), means the guest machine had committed bytes of around 6 GB, a situation that left the guest machine severely memory constrained.
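
The arithmetic is simple enough to sanity-check in a few lines of code. The sketch below just restates the two formulas above and reproduces the example from the text (a pressure of 150 against 4 GB of visible physical memory implies roughly 6 GB committed).

    #include <stdio.h>

    // Memory Pressure = (Committed Memory / Visible Guest Physical Memory) * 100
    static double MemoryPressure(double committedGB, double visiblePhysicalGB)
    {
        return (committedGB / visiblePhysicalGB) * 100.0;
    }

    // Committed Bytes = (Memory Pressure / 100) * Guest Physical Memory
    static double CommittedFromPressure(double pressure, double visiblePhysicalGB)
    {
        return (pressure / 100.0) * visiblePhysicalGB;
    }

    int main(void)
    {
        printf("Pressure  = %.0f\n", MemoryPressure(6.0, 4.0));             // 150
        printf("Committed = %.1f GB\n", CommittedFromPressure(150.0, 4.0)); // 6.0 GB
        return 0;
    }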

Memory pressure looking at a single vm

Figure 10. The Memory Pressure indicator is the ratio of guest Committed Bytes to visible Physical Memory, multiplied by 100. Prior to Hyper-V increasing the amount of Physical Memory visible to the guest from 4 GB to 8 GB around 5:45 PM, the Memory Pressure for the guest was 150.

Memory Pressure, then, corresponds to the ratio of virtual to physical memory that was discussed in the earlier chapter on Windows memory management where I recommended using it as a memory contention index for memory capacity planning. This is precisely the way Memory Pressure is used in Hyper-V. The Memory Pressure values calculated for the guest machines subject to dynamic memory management are then used to determine how to adjust the amount of Guest Physical memory granted to those guest machines.

It is also important to remember that the memory contention index is not always a foolproof indicator of a physical memory constraint in Windows. Committed Bytes as an indicator of demand for physical memory can be misleading. Windows applications like SQL Server and the Exchange Server store.exe process will allocate as much virtual memory to use as data buffers as possible, up to the limit of the amount of physical memory that is available. SQL Server then listens for low and high memory notifications issued by the Windows Memory Manager to tell it when it is safe to acquire more buffers or when it needs to release older, less active ones. The .NET Framework functions similarly. In a managed Windows application, the CLR listens for low memory notifications and triggers a garbage collection run to free unused virtual memory in any of the managed heaps when the low memory signal is received. What makes Hyper-V dynamic memory interesting is that these process-level dynamic virtual memory management adjustments that SQL Server and .NET Framework applications use continue to operate when Hyper-V adds or removes memory from the guest machine.

Memory Buffer

Another dynamic memory option pertains to the size of the buffer of free machine memory pages that Hyper-V maintains on behalf of each guest in order to speed up memory allocation requests. By default, the hypervisor maintains a memory buffer for each guest machine subject to dynamic memory that is 20% of the size of its grant of guest physical memory, but that parameter can be adjusted upwards or downwards for each guest machine. On the face of it, the Memory Buffer option looks useful for reducing the performance risks associated with under-provisioned guest machines. So long as the Memory Buffer remains stocked with free machine memory pages, operations to add physical memory to a child partition can be satisfied quickly.

Hyper-V available memory chart

Figure 14. Hyper-V maintains Available Memory to help speed memory allocation requests.

A single performance counter is available that tracks overall Available Memory, which is compared to the average Memory Pressure in Figure 14. Intuitively, maintaining some excess capacity in the hypervisor’s machine memory buffer seems like a good idea. When the Available Memory buffer is depleted, in order for the Hyper-V Dynamic Memory Balancer to add memory to a guest machine, it first has to remove memory from another guest machine, an operation which does not take effect immediately. Generating an alert based on Available Memory falling below a threshold value of 3-5% of total machine memory is certainly appropriate. Unfortunately, Hyper-V does not provide much information or feedback to help you make adjustments to the tuning parameter and understand how effective the Memory Buffer is.

 

 


Hyper-V architecture: Intercepts, Interrupts and Hypercalls

This is the third post in a series on Hyper-V performance. The series begins here.

Three interfaces exist that allow for interaction and communication between the hypervisor, the Root partition and the guest partitions: intercepts, interrupts, and the direct Hypercall interface. These interfaces are necessary for the virtualization scheme to function properly, and their usage accounts for much of the overhead virtualization adds to the system. Hyper-V measures and reports on the rate these different interfaces are used, which is, of course, workload dependent. Frankly, the measurements that show the rate at which the hypervisor processes intercepts, interrupts, and Hypercalls are seldom of interest to anyone outside the Microsoft developers working on Hyper-V performance itself. But these measurements do provide insight into the Hyper-V architecture and can help us understand how the performance of the applications running on guest machines is impacted by virtualization. Figure 3 is a graph showing these three major sources of virtualization overhead in Hyper-V.

Hyper-V overheads

Figure 3. Using the Hyper-V performance counters, you can monitor the rate that intercepts, virtual interrupts, and Hypercalls are handled by the hypervisor and various Hyper-V components.

Intercepts.

Intercepts are the primary mechanism used to maintain a consistent view of the virtual processor that is visible to the guest OS. Privileged instructions and other operations issued by the guest operating system that would be valid if the OS were accessing the native hardware need to be intercepted by the hypervisor and handled in a way that maintains a consistent view of the virtual machine. Intercepts make use of another hardware assist – the virtualization hardware that allows the hypervisor to intercept certain operations. Intercepted operations include the guest machine OS

  • issuing a CPUID instruction to identify the hardware characteristics
  • accessing machine-specific registers (MSRs)
  • accessing I/O ports directly
  • executing instructions that cause hardware exceptions that must be handled by the OS

When these guest machine operations are detected by the hardware, control is immediately transferred to the hypervisor to resolve. For example, if the guest OS believes it is running on a 2-way machine and issues a CPUID instruction, Hyper-V intercepts that instruction and, through the intercept mechanism, supplies a response that is consistent with the virtual machine image. Similarly, whenever a guest OS issues an instruction to read or update a Control Register (CR) or a Machine-Specific Register (MSR) value, this operation is intercepted, and control is transferred to the parent partition where the behavior the guest OS expects is simulated.
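
You can observe the result of this interception from inside any guest with a few lines of code. The sketch below uses the MSVC __cpuid intrinsic: leaf 1 reports the hypervisor-present bit in ECX, and leaf 0x40000000 returns the hypervisor vendor signature, both of which are synthesized by the intercept handling described above (on Hyper-V the signature reads "Microsoft Hv").

    #include <intrin.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int regs[4];   // EAX, EBX, ECX, EDX

        // CPUID leaf 1: bit 31 of ECX is the "hypervisor present" bit.
        __cpuid(regs, 1);
        int hypervisorPresent = (regs[2] >> 31) & 1;
        printf("Hypervisor present: %d\n", hypervisorPresent);

        if (hypervisorPresent) {
            // CPUID leaf 0x40000000: EBX/ECX/EDX carry the vendor signature.
            char vendor[13] = { 0 };
            __cpuid(regs, 0x40000000);
            memcpy(vendor + 0, &regs[1], 4);   // EBX
            memcpy(vendor + 4, &regs[2], 4);   // ECX
            memcpy(vendor + 8, &regs[3], 4);   // EDX
            printf("Hypervisor vendor signature: %s\n", vendor);
        }
        return 0;
    }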

Resolving intercepts in Hyper-V is a cooperative process that involves the Root partition. When a virtual machine starts, the Root partition makes a series of Hypercalls that establish the intercepts it will handle, providing a callback address that the hypervisor uses to signal the Root partition when that particular interception occurs. Based on the virtual machine state maintained in the VM worker process, the Root partition simulates the requested operation and then allows the intercepted instruction to complete its execution.

Hyper-V is instrumented to report the rate that several categories of intercepts occur. Some intercepts occur infrequently, like issuing CPUID instructions, something the OS needs to do rarely. Others like Machine-Specific Register access are apt to occur more frequently, as illustrated in Figure 4, which compares the rate of MSR accesses to the overall intercept rate, summed over all virtual processors for a Hyper-V host machine.

MSR access intercepts per second graph

Figure 4. The rate MSR intercepts are processed, compared to the overall intercept rate (indicated by an overlay line graphed against the secondary, right-hand y-axis).

In order to perform its interception functions, the Root partition’s VM worker process maintains a record of the virtual machine state. This includes keeping track of the virtual machine’s registers each time there is an interrupt, plus maintaining a virtual APIC for interrupt handling, as well as additional virtual hardware interfaces, what some authors describe as a “virtual motherboard” of devices representing the full simulated guest machine hardware environment.

Interrupts.

Guest machines accessing the synthetic disk and network devices that are installed are presented with a virtualized interrupt handling mechanism. Compared to native IO, this virtualized interrupt process adds latency to guest machine disk and network IO requests to synthetic devices. Latency increases because device interrupts need to be processed twice, once in the Root partition, and again in the guest machine. Latency also increases when interrupt processing at the guest machine level is deferred because none of the virtual processors associated with the guest are currently dispatched.

To support guest machine interrupts, Hyper-V builds and continuously maintains a synthetic interrupt controller associated with the guest’s virtual processors. When an external interrupt generated by a hardware device attached to the Host machine occurs because the device has completed a data transfer operation, the interrupt is directed to the Root partition to process. If the device interrupt is found to be associated with a request that originated from a guest machine, the guest’s synthetic interrupt controller is updated to reflect the interrupt status, which triggers action inside the guest machine to respond to the interrupt request. The device drivers loaded on the guest machine are suitably “enlightened” to skip execution of as much redundant logic as possible during this two-phased process.

The first phase of interrupt processing occurs inside the Root partition. When a physical device raises an interrupt that is destined for a guest machine, the Root partition handles the interrupt in the Interrupt Service Routine (ISR) associated with the device immediately in the normal fashion. When the device interrupt is in response to a disk or network IO request from a guest machine, there is a second phase of interrupt processing that occurs associated with the guest partition. The second phase, which is required because the guest machine also must handle the interrupt, increases the latency of every IO interrupt that is not processed directly by the child partition.

An additional complication arises if none of the guest machine’s virtual processors are currently dispatched. If no guest machine virtual processor is executing, then interrupt processing on the guest is deferred until one of its virtual processors is executing. In the meantime, the interrupt is flagged as pending in the state machine maintained by the Root partition. The amount of time that device interrupts are pending also increases the latency associated with synthetic disk and network IO requests initiated by the guest machine.

The increased latency associated with synthetic device interrupt-handling can have a very serious performance impact. It can present a significant obstacle to running disk or network IO-bound workloads as guest machines. The problem is compounded because the added delay and its impact on an application is difficult to quantify. The Logical Disk and Physical Disk\Avg. Disk sec/Transfer counters on the Root partition are not always reliably capable of measuring the disk latency associated with the first phase of interrupt processing because Root partition virtual processors are also subject to deferred interrupt processing and virtualized clocks and timers. The corresponding guest machine Logical Disk and Physical Disk\Avg. Disk sec/Transfer counters are similarly burdened. Unfortunately, a careful analysis of the data suggests that none of the Windows disk response time measurements can be fully trusted under Hyper-V, even for disk devices that are natively attached to the guest partition.

The TCP/IP networking stack, as we have seen in our earlier look at NUMA architectures, has a well-deserved reputation for requiring execution of a significant number of CPU instructions to process network IO. Consequently, guest machines that handle a large amount of network traffic are subject to this performance impact when running virtualized. The guest machine synthetic network driver enlightenment helps considerably with this problem, as do NICs featuring TCP offload capabilities. Network devices that can be attached to the guest machine in native mode are particularly effective performance options in such cases.

In general, over-provisioning processor resources on the VM Host is an effective mitigation strategy to limit the amount and duration of deferred interrupt processing delays that occur for both disk and network IO. Disk and network hardware that can be directly attached to the guest machine is certainly another good alternative. Interrupt processing for disk and network hardware that is directly attached to the guest is a simpler, one-phase process, but one that is also subject to pending interrupts whenever the guest’s virtual processors are themselves delayed. The additional latency associated with disk and network IO is one of the best reasons to run a Windows machine in native mode.

VMBus

Guest machine interrupt handling relies on an inter-partition communications channel called the VMBus, which makes use of the Hypercall capability that allows one partition to signal another partition and send messages. (Note that since child partitions have no knowledge of other child partitions, this Hypercall signaling capability is effectively limited to use by the child partition and its parent, the Root partition.) Figure 5 illustrates the path taken when a child partition initiates a disk or network IO to a synthetic disk or network device installed in the guest machine OS. IOs to synthetic devices are processed by the guest machine device driver, which is enlightened, as discussed above. The synthetic device driver passes the IO request to another Hyper-V component installed inside the guest called a Virtualization Service Client (VSC). The VSC inside the guest machine translates the IO request into a message that is put on the VMBus.

The VMBus is the mechanism used for passing messages between a child partition and its parent, the Root partition. Its main function is to provide a high bandwidth, low latency path for the guest machine to issue IO requests and receive replies. According to Mark Russinovich, writing in Windows Internals, one message-passing protocol the VMBus uses is a ring of buffers shared by the child and parent partitions: “essentially an area of memory in which a certain amount of data is loaded on one side and unloaded on the other side.” Russinovich’s book continues, “No memory needs to be allocated or freed because the buffer is continuously reused and simply rotated.” This mechanism is good for message passing between the partitions, but is too slow for large data transfers due to the necessity to copy data to and from the message buffers.
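
To illustrate the “loaded on one side and unloaded on the other” idea, here is a generic single-producer/single-consumer ring buffer in plain C. It is a conceptual sketch only, not the actual VMBus ring layout or protocol, and it omits the memory barriers and interrupt signaling a real inter-partition channel needs.

    #include <stdint.h>

    #define RING_SIZE 4096u   /* sketch only; a power of two keeps the math simple */

    typedef struct {
        volatile uint32_t head;        /* producer (guest VSC) write position */
        volatile uint32_t tail;        /* consumer (root VSP) read position   */
        uint8_t data[RING_SIZE];       /* shared region, continuously reused  */
    } ring_t;

    /* Producer side: copy a message into the ring if there is room. */
    static int ring_put(ring_t *r, const uint8_t *msg, uint32_t len)
    {
        uint32_t used = r->head - r->tail;          /* wrap-safe with unsigned math */
        if (len > RING_SIZE - used)
            return 0;                               /* ring full; caller retries    */
        for (uint32_t i = 0; i < len; i++)
            r->data[(r->head + i) % RING_SIZE] = msg[i];
        r->head += len;                             /* publish (barriers omitted)   */
        return 1;
    }

    /* Consumer side: drain up to 'max' bytes from the ring. */
    static uint32_t ring_get(ring_t *r, uint8_t *out, uint32_t max)
    {
        uint32_t avail = r->head - r->tail;
        uint32_t n = (max < avail) ? max : avail;
        for (uint32_t i = 0; i < n; i++)
            out[i] = r->data[(r->tail + i) % RING_SIZE];
        r->tail += n;                               /* free the space for reuse     */
        return n;
    }

The important property is the one Russinovich highlights: no memory is allocated or freed per message; the same region is reused as the offsets advance.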

Another VMBus messaging protocol uses child memory that is mapped directly to the parent partition address space. This direct memory access VMBus mechanism allows disk and network devices managed by the Root partition to reference buffers allocated in a child partition. This is the technique Hyper-V uses to perform bulk data IO operations for synthetic disk and network devices. For the purpose of issuing IO requests to native devices, the Root partition is allowed to access machine memory addresses directly. In addition, it can request the hypervisor to translate guest machine virtual addresses allocated for use as VMBus IO buffers into machine addresses that can be referenced by the physical devices supporting DMA that are attached to the Root.

Inside the Root partition, Hyper-V components known as Virtualization Service Providers (VSPs) receive the IO requests issued to synthetic devices by the guest machines and translate them into physical disk and network IO requests. Consider, for example, a guest partition request to read or write a .vhdx file that the VSP must translate into a disk IO request to the native file system on the Root. These translated requests are then passed to the native disk IO driver or the networking stack installed inside the Root partition that manages the physical devices. The VSPs also interface with the VM worker process that is responsible for the state machine that represents the virtualized physical hardware presented to the guest OS. Using this mechanism, interrupts for guest machine synthetic devices can be delivered properly to the appropriate guest machine.

When the native device completes the IO operation requested, it raises an interrupt that the Root partition handles normally. This process is depicted in Figure 5. When the request corresponds to one issued by a guest machine, what is different under Hyper-V is that a waiting thread provided by the VSP and associated with that native device is then awakened by the device driver. The VSP also ensures that the device response adheres to the form that the synthetic device driver on the guest machine expects. It then uses the VMBus inter-partition messaging mechanism to signal the guest machine that it has an interrupt pending.

HyperV interrupt processing

Figure 5. Synthetic interrupt processing involves the Virtualization Service Provider (VSP) associated with the device driver invoked to process the interrupt. Data acquired from the device is transferred directly into guest machine memory using a VMBus communication mechanism, where it is processed by the Virtualization Service Client (VSC) associated with the synthetic device.

From a performance monitoring perspective, the Hyper-V hypervisor reports on the overall rate of virtual interrupt processing, as illustrated in Figure 6. The hypervisor, however, has no understanding of which hardware device is associated with each virtual interrupt. It can report the number of deferred virtual interrupts, but it does not report the amount of pending interrupt delay, which can be considerable. The measurement components associated with disk and network IO in the Root partition function normally, with the caveat that the disk and network IO requests counted by the Root partition aggregate all the requests from both the Root and child partitions. Windows performance counters inside the guest machine continue to provide an accurate count of disk and network IO and the number of bytes transferred for that partition. The guest machine counters are useful for identifying which guest partitions are responsible for the overload when the Root’s disks or network interface cards are saturated. Later on, we will review some examples that illustrate how all these performance counters function under Hyper-V.

Hyper-V virtual interrupts chart

Figure 6. Virtual interrupt processing per guest machine virtual processor. The rate of pending interrupts is displayed as a dotted line plotted against the secondary y-axis. In this example, approximately half of all virtual interrupts are subject to deferred interrupt processing delays.

 

Hypercalls.

The Hypercall interface provides a calling mechanism that allows child partitions to communicate with the Root partition and the hypervisor. Some of the Hypercalls support the guest OS enlightenments mentioned earlier. Others are used by the Root partition to communicate requests to the hypervisor to configure, start, modify, and stop child partitions. Another set of Hypercalls is used in dynamic memory management, which is discussed in the post on Dynamic Memory. Hypercalls are also defined to enable the hypervisor to log events and post performance counter data back to the Root partition, where it can be gathered by Perfmon and other similar tools.

Hypercalls per second graph

Figure 7. Monitoring the rate Hypercalls are being processed.
