Hyper-V performance: CPU Priority scheduling options

Finally, let’s look at how effective the Hyper-V processor scheduling priority settings are at insulating preferred guest machines from the performance impact of an under-provisioned (or over-committed) Hyper-V Host machine. As discussed earlier, Hyper-V’s virtual processor scheduling options allow you to prioritize the workloads of guest machines resident on the same Hyper-V Host. To test the effectiveness of these priority scheduling options, I re-ran the under-provisioned 4 X 2-way guest machine scenario with two of the guest machines set to run at a higher priority, while the other two guests were set to run at a lower priority. I ran separate tests to evaluate the virtual processor Reservation settings in one scenario and relative weights in another.

| Configuration | # guest machines | CPUs per machine | Best case elapsed time (minutes) | Stretch factor |
|---|---|---|---|---|
| Native machine | – | 4 | 90 | – |
| 4 Guest machines (no priority) | 4 | 2 | 370 | 4.08 |
| 4 Guest machines with Relative Weights | 4 | 2 | 230 | 2.56 |
| 4 Guest machines with Reservations | 4 | 2 | 270 | 3.00 |
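
The stretch factor column is simply each configuration’s best case elapsed time divided by the native machine’s 90-minute run. A minimal Python sketch of that calculation, using the elapsed times reported above:

```python
# Stretch factor = best case elapsed time under virtualization / native elapsed time.
# Elapsed times (in minutes) are the values reported in the table above.
NATIVE_ELAPSED_MINUTES = 90

scenarios = {
    "4 guest machines, no priority": 370,
    "4 guest machines, Relative Weights": 230,
    "4 guest machines, Reservations": 270,
}

for name, elapsed in scenarios.items():
    # Note: the published table lists 4.08 for the no-priority run, presumably
    # computed from the unrounded raw timings; 370/90 works out to about 4.11.
    print(f"{name}: {elapsed} min, stretch factor {elapsed / NATIVE_ELAPSED_MINUTES:.2f}")
```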

CPU Scheduling with Reservations.

For the Reservation scenario, the two high priority guest machines reserved 50% of the virtual processor capacity they were configured with, while the two low priority guest machines reserved 0% of their virtual processor capacity. Figure 34 shows the Hyper-V Manager’s view of the situation: the higher priority machines 1 & 2 clearly have favored access to the Hyper-V logical processors. The two higher priority guests account for 64% of the CPU usage, while the two low priority machines consume just 30% of the processor resources. The guest machines configured with high priority settings executed to completion in about 270 minutes (about 4½ hours), roughly 27% faster than the baseline scenario in which four equally weighted guest machines executed the benchmark program without any priority settings in force.
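
For reference, Hyper-V expresses the reserve as a percentage of the guest’s own virtual processor allocation, so on this 4-way Host each 2-way guest with a 50% reserve is guaranteed the equivalent of one logical processor, or 25% of total Host capacity. A small Python sketch of that arithmetic (the helper function is my own illustration, not a Hyper-V API):

```python
# Rough arithmetic for the Reservation scenario: the Hyper-V reserve is a
# percentage of the guest's own virtual processor allocation, so the share of
# total Host capacity it guarantees is reserve% * (guest vCPUs / Host LPs).
HOST_LOGICAL_PROCESSORS = 4  # the Host used in these tests exposes 4 logical processors

def reserved_host_share(reserve_pct, vcpus, host_lps=HOST_LOGICAL_PROCESSORS):
    """Fraction of total Host CPU capacity guaranteed to a single guest."""
    return (reserve_pct / 100.0) * vcpus / host_lps

high = reserved_host_share(50, 2)  # each high priority guest: 0.25 of the Host
low = reserved_host_share(0, 2)    # each low priority guest: no guarantee

print(f"Each high priority guest is guaranteed {high:.0%} of the Host")
print(f"The two high priority guests together are guaranteed {2 * high:.0%}")
print(f"Low priority guests are guaranteed {low:.0%} and compete for whatever is left")
```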


Figure 34. The Hyper-V Manager’s view of overall CPU Usage during the Reservation scenario. Together, the higher priority machines 1 & 2 are responsible for 64% of the CPU usage, while the two low priority machines are consuming just 30% of the CPU capacity.

Figure 35 reports the distribution of Virtual Processor utilization for the four guest machines executing in this Reservation scenario during a one-hour period. Guest machines 1 & 2 are running with the 50% Reservation setting, while machines 3 & 4 are running with the 0% Reservation setting. In contrast to the view in Figure 32, where each guest machine had roughly equal access to the logical processors, the virtual processors of the high priority guests clearly receive favored access. Together, the four higher priority virtual processors consumed about 250% out of a total of 400% of virtual processor capacity, almost twice the residual processor capacity left for the lower priority guest machines.


Figure 35. Virtual Processor utilization for the four guest machines executing in the Reservation scenario.

Hours later, when the two high priority guest machines finished executing the benchmark workload, they went idle and the low priority guests were able to consume more virtual processor capacity. Figure 36 shows the higher priority guest machines executing the benchmark workload until about 10:50 pm, at which point the Test 1 & 2 machines go idle and machines 3 & 4 quickly expand their processor usage.


Figure 36. The higher priority Test 1 & 2 machines go idle at about 10:50 pm, at which point machines 3 & 4 quickly expand their processor usage.

As Figure 36 indicates, even after the high priority Test machines 1 & 2 go idle, their virtual processors still get scheduled to execute on the Hyper-V physical CPUs. When guest machines do not consume all of the virtual processor capacity requested in a Reservation setting, that excess capacity is available for lower priority guest machines to use.
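
In other words, a Reservation behaves as a guarantee rather than a hard partition. The toy calculation below (my illustration, not the actual Hyper-V scheduling algorithm) uses the approximate utilization figures from Figures 35 and 36 to show how much capacity is left for the low priority guests before and after the high priority guests go idle:

```python
# Toy illustration, not the real Hyper-V scheduler: a Reservation only holds
# capacity while the reserving guest actually has work to run, so whatever the
# high priority guests leave unused is available to the low priority guests.
HOST_CAPACITY = 4.0  # logical-processor equivalents on the test Host

def capacity_left_for_low_priority(high_priority_usage):
    """Capacity the low priority guests can compete for, in LP equivalents."""
    return HOST_CAPACITY - high_priority_usage

# While Test 1 & 2 are running the benchmark they consume roughly 2.5 LPs (Figure 35):
print(capacity_left_for_low_priority(2.5))   # ~1.5 LPs left for Test 3 & 4

# After about 10:50 pm they are nearly idle (Figure 36):
print(capacity_left_for_low_priority(0.1))   # ~3.9 LPs available to Test 3 & 4
```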

Figures 37 and 38 show the view of processor utilization available from inside one of the high priority guest machines. Figure 37 shows the view of the virtual hardware that the Windows CPU accounting function provides, along with the instantaneous Processor Ready Queue measurements. These internal measurements indicate that the virtual processors are utilized near 100% and that there is a significant backlog of Ready worker threads from the benchmark workload queued for the two virtual CPUs.


Figure 37. Internal Windows performance counters indicate that the virtual processors are utilized near 100%, with a significant backlog of Ready worker threads from the benchmark workload queued for the two virtual CPUs.

Figure 37 shows the % Processor Time counter from the guest machine’s Processor object, while Figure 38 shows processor utilization for the top five most active processes, with ThreadContentionGenerator.exe, the benchmark program, predominant.


Figure 38. The benchmark program ThreadContentionGenerator.exe consumes all the processor cycles available to the guest machine.
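
The counters behind Figures 37 and 38 are standard Windows performance counters, so the same view can be collected from inside a guest with the built-in typeperf utility. A minimal sketch in Python (the process instance name and the sample count are assumptions for illustration):

```python
# Collect the guest-internal counters shown in Figures 37 and 38 using the
# built-in Windows typeperf utility. Run this inside the guest machine.
import subprocess

counters = [
    r"\Processor(*)\% Processor Time",         # per virtual CPU utilization (Figure 37)
    r"\System\Processor Queue Length",         # backlog of Ready threads (Figure 37)
    r"\Process(ThreadContentionGenerator)\% Processor Time",  # the benchmark process (Figure 38)
]

# Take 60 one-second samples and write them to a CSV file for later charting.
subprocess.run(
    ["typeperf", *counters, "-sc", "60", "-f", "CSV", "-o", "guest_cpu_counters.csv", "-y"],
    check=True,
)
```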

 

CPU Scheduling with Relative Weights.

A second test scenario used Relative Weights to prioritize the guest machines involved in the test, with results very similar to the Reservation scenario. Two guest machines were given high priority scheduling weights of 200, while the other two guest machines were given low priority scheduling weights of 50. This is the identical weighting scheme described in the earlier CPU weight example. Mathematically, the higher priority guest machines were entitled to 80% of the contended processor capacity, with 20% allocated to the lower priority guests. In actuality, Figure 39 reports each high priority virtual processor consuming about 75% of a physical CPU, while the four lower priority virtual processors each consumed slightly more than 20% of a physical CPU.
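
Worked out explicitly, the expected split is just each guest’s weight divided by the sum of the weights of the guests contending for the CPUs. A short Python sketch (the guest names are illustrative):

```python
# Expected CPU split under Relative Weights: each guest's weight divided by
# the sum of the weights of all guests contending for the processors.
weights = {"Test 1": 200, "Test 2": 200, "Test 3": 50, "Test 4": 50}
total_weight = sum(weights.values())

for vm, weight in weights.items():
    print(f"{vm}: weight {weight} -> {weight / total_weight:.0%} of contended CPU capacity")

high_priority_share = (weights["Test 1"] + weights["Test 2"]) / total_weight
print(f"High priority guests combined: {high_priority_share:.0%}")  # the 80% cited above
```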

Since the higher priority guest machines were able to consume more processor time than in the Reservation scenario, they completed the benchmark task in 230 minutes, faster than the best case in the Reservation scenario and about 38% faster than the baseline scenario in which all four guests ran at the same Hyper-V scheduling priority.


Figure 39. In the Relative Weights scenario, each high priority virtual processor consumed about 75% of a physical CPU, while the four lower priority virtual processors consumed slightly more than 20% of a physical CPU.

As in the Reservation scenario, once the high priority guest machines completed their tasks and went idle, the lower priority guest machines gained greater access to the physical CPUs on the Hyper-V Host machine. Figure 40 highlights this shift: the higher priority virtual processors for guest machines 1 & 2 tail off at around 1:40 pm, allowing processor usage by the lower priority virtual processors to take off at that point. The CPU usage pattern during this transition in the Relative Weights scenario is very similar to the Reservation scenario shown in Figure 36.


Figure 40. When the higher priority virtual processors for guest machines 1 & 2 finish processing at about 1:40 pm, processor usage by the lower priority virtual processors accelerates.

 
