Using huge pages can improve performance for some applications by reducing the number of Translation Lookaside Buffer (TLB) entries needed to describe packet buffers, thereby minimizing TLB 'thrashing'. A single huge page also holds many packet buffers while consuming only one entry in the buffer table. Explicit huge pages are recommended.
Onload can use at most 4096 huge pages in total.
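Whether a given Onload stack uses huge pages is controlled per process. A minimal sketch, assuming the EF_USE_HUGE_PAGES environment variable supported by recent Onload releases (0 = never use huge pages, 1 = use them if available, 2 = always use them) and a placeholder application name:
# EF_USE_HUGE_PAGES is assumed here; consult the parameter reference of the
# installed release to confirm it is supported. my_app is a placeholder.
EF_USE_HUGE_PAGES=1 onload ./my_app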
The current huge page allocation can be checked by inspecting /proc/meminfo:
cat /proc/meminfo | grep Huge
This should return something similar to:
AnonHugePages: 2048 kB
HugePages_Total: 2050
HugePages_Free: 2050
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
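These counters can also be checked programmatically before starting an application. A minimal sketch, assuming an illustrative requirement of 1024 free huge pages:
# Read the number of currently free huge pages from /proc/meminfo
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
# 1024 is an illustrative threshold, not an Onload requirement
required=1024
if [ "$free" -lt "$required" ]; then
    echo "Only $free huge pages free; $required required" >&2
fi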
The total number of huge pages available on the system is the value of HugePages_Total. The following command can be used to dynamically set or change the number of huge pages allocated on a system to <N> (where <N> is a non-negative integer):
echo <N> > /proc/sys/vm/nr_hugepages
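For example, to allocate 1024 huge pages and confirm the result (the count is illustrative; the kernel may allocate fewer pages than requested if memory is fragmented, so the value should be read back):
echo 1024 > /proc/sys/vm/nr_hugepages
cat /proc/sys/vm/nr_hugepages
To make the allocation persistent across reboots, set vm.nr_hugepages in /etc/sysctl.conf or pass hugepages=<N> on the kernel command line.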
On a NUMA platform, the kernel will attempt to distribute the huge page pool over the set of all allowed nodes specified by the NUMA memory policy of the task that modifies nr_hugepages. The following command can be used to check the per-node distribution of huge pages on a NUMA system:
cat /sys/devices/system/node/node*/meminfo | grep Huge
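On a two-node system this should return something similar to the following (values are illustrative):
Node 0 HugePages_Total: 1024
Node 0 HugePages_Free: 1024
Node 0 HugePages_Surp: 0
Node 1 HugePages_Total: 1024
Node 1 HugePages_Free: 1024
Node 1 HugePages_Surp: 0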
Huge pages can also be allocated on a per-NUMA-node basis, rather than being distributed across multiple NUMA nodes. The following command can be used to allocate <N> huge pages on NUMA node <M>:
echo <N> > /sys/devices/system/node/node<M>/hugepages/hugepages-2048kB/nr_hugepages
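For example, to reserve 1024 huge pages on the node local to the network adapter (node 0 here is illustrative; the adapter's node can be read from /sys/class/net/<interface>/device/numa_node):
# Allocate 1024 2048 kB huge pages on NUMA node 0
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# Read the value back to confirm the allocation succeeded
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages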