I asked a similar question years ago.
Now, my machine has four 1G hugepages and 256 2MB hugepages:
# cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
4
# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
256
But then numastat -m
shows:
Per-node system memory usage (in MBs):
                          Node 0           Total
                 --------------- ---------------
MemTotal                65205.89        65205.89
MemFree                 58656.55        58656.55
MemUsed                  6549.34         6549.34
Active                    158.62          158.62
Inactive                   89.90           89.90
Active(anon)               15.32           15.32
Inactive(anon)              9.04            9.04
Active(file)              143.30          143.30
Inactive(file)             80.86           80.86
Unevictable                10.69           10.69
Mlocked                    10.69           10.69
Dirty                       0.02            0.02
Writeback                   0.00            0.00
FilePages                 235.87          235.87
Mapped                     16.08           16.08
AnonPages                  23.42           23.42
Shmem                       9.43            9.43
KernelStack                 5.38            5.38
PageTables                  2.84            2.84
NFS_Unstable                0.00            0.00
Bounce                      0.00            0.00
WritebackTmp                0.00            0.00
Slab                       50.60           50.60
SReclaimable               23.14           23.14
SUnreclaim                 27.46           27.46
AnonHugePages               0.00            0.00
HugePages_Total          4096.00         4096.00
HugePages_Free           4096.00         4096.00
HugePages_Surp              0.00            0.00
Based on the answer to my previous question, 4096 should be a "unit". Now I am confused: a unit of what?
It seems to me the unit here is MB, and that numastat
doesn't include the 2 MB hugepages: 4 x 1 GB accounts for exactly 4096 MB, while the 256 x 2 MB = 512 MB are nowhere to be seen.
Why aren't the 2 MB hugepages reported here?
Huge pages on Linux aren't the easiest thing to understand, especially when some tools show things that others do not, and everyone does their own unit conversions.
System wide, /proc/meminfo
will show the sum over all huge page sizes as Hugetlb.
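For example, the two pools in your question should add up to 4 x 1048576 kB + 256 x 2048 kB = 4718592 kB, so on a kernel new enough to have the Hugetlb field I would expect roughly:

# grep Hugetlb /proc/meminfo
Hugetlb:         4718592 kB

(The HugePages_* fields in /proc/meminfo, by contrast, cover only the default huge page size.)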
numastat -m will output a "meminfo-like" report based on the per NUMA node stats in
/sys/devices/system/node/node?/meminfo
but it also converts the units to MB. I don't know why this apparently lacks a sum over all page sizes; maybe the kernel punted on it and lets user tools do what they want with the per-node data. Presumably the output you got covers only the 4x 1 GB pages: 4 x 1024 MB = 4096 MB. You can compute the full sum yourself from sysfs, as in the sketch below.
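Here is a minimal sketch of that sum for node 0, using the same sysfs paths as in your question (the loop and variable names are mine):

# Add up every huge page pool on node 0: page count x page size in kB.
total_kb=0
for d in /sys/devices/system/node/node0/hugepages/hugepages-*kB; do
    size_kb=${d##*hugepages-}   # e.g. "2048kB"
    size_kb=${size_kb%kB}       # strip the unit suffix
    pages=$(cat "$d/nr_hugepages")
    total_kb=$(( total_kb + pages * size_kb ))
done
echo "node0 huge pages: $(( total_kb / 1024 )) MB"

With your pools this prints "node0 huge pages: 4608 MB", i.e. the 4096 MB that numastat reports plus the 512 MB of 2 MB pages it leaves out.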
hugeadm (from libhugetlbfs) bases its recommended shmmax on the sum over each of the page sizes in
/sys/kernel/mm/hugepages/
hugeadm --explain
is also useful to check the default page size and the size of each pool.
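If libhugetlbfs is installed, these give a quick per-size overview (the exact output format varies by version):

# hugeadm --pool-list
# hugeadm --explain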
is also useful to check default and size of each pool.Using only one huge page size might be simpler to operate. Less than 5 GB of 2 MB pages is relatively small, these could all be 2 MB. 1 GB page size works, but could be an inefficient use of space for small allocations.