vSphere High Performance Cookbook

Keeping memory free for VMkernel

The amount of memory the VMkernel tries to keep free is controlled by the Mem.MemMinFreePct parameter. vSphere 4.1 introduced dynamic thresholds for the Soft, Hard, and Low memory states so that appropriate limits are set to prevent virtual machine performance problems while still protecting the VMkernel. The state a host is in, based on the percentage of pRAM that is still free, determines which memory reclamation techniques are used.

A fixed MemMinFreePct default of 6 percent becomes inefficient now that hosts with 256 gigabyte or 512 gigabyte of memory are increasingly mainstream; on a 512 gigabyte host, a 6 percent threshold leaves roughly 30 gigabyte idle most of the time. However, not all customers run such large systems; some prefer to scale out rather than scale up, and for them a 6 percent MemMinFreePct may be perfectly suitable. To get the best of both worlds, on vSphere 5 hosts the VMkernel uses a sliding scale to derive the Mem.MemMinFreePct threshold from the amount of RAM installed. The sliding scale does not apply to vSphere 4.
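As a quick back-of-the-envelope check (plain Python arithmetic, not any vSphere tooling), the following shows how much memory a flat 6 percent threshold keeps idle on hosts of a few sizes; the 512 gigabyte case matches the 30 gigabyte figure mentioned earlier:

    # Rough arithmetic only: memory a flat 6 percent MemMinFreePct keeps idle.
    for host_ram_gb in (64, 256, 512):
        print(f"{host_ram_gb} GB host -> ~{host_ram_gb * 0.06:.1f} GB kept free")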

Getting ready

To step through this recipe, you will need a running ESXi Server and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

VMkernel uses a sliding scale to determine the Mem.MemMinFreePct threshold based on the amount of RAM installed in the host, and this happens automatically. However, if you need to change this behavior and set a value of your own, follow these steps:

  1. Open vSphere Client.
  2. Log in to the vCenter Server.
  3. On your Home screen, select Hosts and Clusters.
  4. Choose the ESXi host on which you want to perform this activity.
  5. Go to the Configuration tab, and click on Advanced Settings.
  6. In the Memory section, scroll down and locate Mem.MemMinFreePct.
  7. Choose a value between 0 and 50, where 0 indicates automatic.

Here you set the percentage of host memory to reserve for accelerating memory allocations when free memory is low; in other words, this percentage determines when memory reclamation techniques (besides TPS) start to be used.
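If you prefer to script this change rather than click through the vSphere Client, the same advanced option can be set through the vSphere API. The following is only a rough sketch using the pyVmomi Python bindings, which this recipe does not otherwise require; the vCenter address, ESXi host name, and credentials are placeholders for your own environment, and the value passed must match the option's type (an integer here):

    # Sketch: set Mem.MemMinFreePct on one ESXi host via pyVmomi.
    # All connection details below are placeholders for your environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=ssl._create_unverified_context())  # lab use only
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.HostSystem], True)
        esxi = next(h for h in view.view if h.name == "esxi01.example.com")

        # The advancedOption manager exposes the same keys shown in the
        # vSphere Client's Advanced Settings dialog; 0 means automatic.
        esxi.configManager.advancedOption.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="Mem.MemMinFreePct", value=0)
        ])
    finally:
        Disconnect(si)

Setting the value back to 0 returns the host to the automatic behavior.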

The following is a sample screenshot when you configure this parameter:

How it works...

MemMinFreePct is used to calculate the minimum amount of free memory that we want to preserve by reclaiming memory. For the first slice of host memory (0 to 4 gigabyte), we want to keep 6 percent free; otherwise, memory requests from the VMkernel or from VMs might not be fulfilled.

For the next memory range (4 to 12 gigabyte), we want to keep 4 percent free, and for memory in the 12 to 28 gigabyte range, the free state threshold drops to 2 percent.
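Treated as a sliding scale, each of these ranges contributes its own percentage and the contributions are added together; memory above 28 gigabyte is commonly documented as contributing 1 percent, a band this recipe does not spell out. The short Python sketch below (illustrative arithmetic only, not vSphere tooling) works out the resulting free-memory target for a few host sizes:

    # Illustrative arithmetic for the vSphere 5 sliding scale:
    # 6% of the first 4 GB, 4% of the next 8 GB, 2% of the next 16 GB,
    # and (assumed here) 1% of everything above 28 GB.
    def min_free_gb(host_ram_gb):
        bands = [(4, 0.06), (8, 0.04), (16, 0.02)]  # (band size in GB, share kept free)
        remaining, target = host_ram_gb, 0.0
        for size, pct in bands:
            slice_gb = min(remaining, size)
            target += slice_gb * pct
            remaining -= slice_gb
        return target + remaining * 0.01  # everything above 28 GB

    for ram_gb in (32, 128, 512):
        print(f"{ram_gb} GB host -> keep ~{min_free_gb(ram_gb):.2f} GB free")

Compared with the flat 6 percent figures shown earlier, the sliding scale keeps far less memory idle on large hosts while preserving the larger safety margin on small ones.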

The thresholds for the high, soft, and hard states are about performance: each state corresponds to a successively lower amount of free pRAM, and the main intention is to kick off ballooning and the other reclamation mechanisms before the host reaches the low state.

In a nutshell, the MemMinFreePct parameter defines the minimum desired amount of free memory in the system; falling below this level causes the system to start reclaiming memory through ballooning or swapping.

So, the amount of memory the VMkernel keeps free is controlled by the value of MemMinFreePct, which is now determined using a sliding scale. When free memory is greater than or equal to the derived value, the host is not under memory pressure. The following thresholds show what happens as free memory drops; note that they are based on vSphere 4.x (a short sketch after the list maps a given free-memory percentage to these states):

  • 6 percent free (High): Split small pages for TPS (if applicable); begin ballooning.
  • 4 percent free (Soft): Ballooning in full swing; in vSphere 4.1, memory compression also begins.
  • 2 percent free (Hard): VM swapping begins; large pages are broken so that TPS can work in full swing.
  • 1 percent free (Low): No new pages provided to VMs.
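As a purely illustrative sketch (this is not how the VMkernel is implemented), the following Python lookup maps a given percentage of free pRAM to the deepest threshold it has reached and the corresponding reclamation activity from the list above:

    # Sketch only: map a free-pRAM percentage to the vSphere 4.x threshold it has
    # fallen to and the reclamation activity listed above.
    THRESHOLDS = [
        (6.0, "high", "split small pages for TPS (if applicable); begin ballooning"),
        (4.0, "soft", "ballooning in full swing; memory compression begins (vSphere 4.1)"),
        (2.0, "hard", "VM swapping; large pages broken so TPS can work in full swing"),
        (1.0, "low",  "no new pages provided to VMs"),
    ]

    def reclamation_for(free_pram_pct):
        reached = ("-", "free memory at or above the derived value; no memory pressure")
        for threshold, state, action in THRESHOLDS:
            if free_pram_pct <= threshold:
                reached = (state, action)
        return reached

    for pct in (8, 5, 3, 1.5, 0.5):
        state, action = reclamation_for(pct)
        print(f"{pct}% pRAM free -> {state}: {action}")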
Note

Even if a host is under memory pressure, that only means less free pRAM is available than preferred; it does not mean a performance problem is currently present. Because VMs often have more vRAM than they need, and because the hypervisor does not know how much vRAM the guest operating system considers free, there is often pRAM allocated to back vRAM that holds only junk data and could be freed for use elsewhere without any negative performance impact.