NUMA commands
[root@islpfdkvm12 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 131045 MB
node 0 free: 34018 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 131072 MB
node 1 free: 15748 MB
node distances:
node   0   1
  0:  10  11
  1:  11  10
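The distance matrix is relative memory latency: 10 means local access, 11 means an access from one node to the other's memory. As a rough sketch (assuming the matrix format shown above), the remote-to-local latency ratio can be pulled out with awk:

```shell
# Parse the numactl --hardware distance matrix pasted above and report
# the remote-to-local latency ratio (11/10 on this machine).
numactl_distances='node   0   1
  0:  10  11
  1:  11  10'

echo "$numactl_distances" | awk '
  NR == 2 { local = $2; remote = $3 }   # row for node 0: local, then remote
  END     { printf "remote/local latency ratio: %.1f\n", remote / local }'
```

A ratio near 1.0 means cross-node penalties are small; larger ratios make CPU/memory binding (numactl --cpunodebind / --membind) more worthwhile.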
[root@islpfdkvm12 ~]# free -mt
             total       used       free     shared    buffers     cached
Mem:        258451     208978      49472          0        309     141901
-/+ buffers/cache:      66768     191682
Swap:         9999        458       9541
Total:      268451     209436      59014
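Note that free reports system-wide totals, not per-node figures. The "-/+ buffers/cache" line subtracts buffers and page cache from "used" to show memory actually held by applications. A quick sanity check on the numbers above (all in MB):

```shell
# Verify the free -mt arithmetic: used minus buffers and cache
# should equal the "-/+ buffers/cache" used column.
used=208978
buffers=309
cached=141901

real_used=$((used - buffers - cached))
echo "used excluding buffers/cache: ${real_used} MB"   # matches 66768
```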
[root@islpfdkvm12 ~]# numastat
                           node0           node1
numa_hit               155199455       157169588
numa_miss               18363861        35640093
numa_foreign            35640093        18363861
interleave_hit             57466           57452
local_node             155198395       157104460
other_node              18364921        35705221
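In numastat output, numa_hit counts allocations satisfied on the intended node, while numa_miss counts allocations that landed on this node even though another node was preferred. From the counters above, the share of node1 allocations that were off-preference can be estimated like this (a rough sketch using the pasted values):

```shell
# Estimate what fraction of node1 allocations were intended for
# another node, using the numastat counters shown above.
node1_hit=157169588
node1_miss=35640093

awk -v h="$node1_hit" -v m="$node1_miss" \
    'BEGIN { printf "node1 miss rate: %.1f%%\n", 100 * m / (h + m) }'
```

A persistently high miss rate suggests processes are allocating memory away from the node their CPUs run on; pinning with numactl (or letting numad manage placement) usually brings it down.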
[root@islpfdkvm12 ~]# numad
Looks like transparent hugepage scan time in /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/scan_sleep_millisecs is 10000 ms.
Consider increasing the frequency of THP scanning,
by echoing a smaller number (e.g. 100) to /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/scan_sleep_millisecs
to more aggressively (re)construct THPs. For example:
# echo 100 > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/scan_sleep_millisecs
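The echo suggested by numad does not survive a reboot, and the redhat_transparent_hugepage path is specific to RHEL kernels; upstream kernels expose the same knob under /sys/kernel/mm/transparent_hugepage/. A hedged sketch that tries both locations (assumption: these are the only two path variants in play):

```shell
# Write the khugepaged scan interval (ms) to the given sysfs file,
# but only if it exists and is writable.
set_thp_scan_ms() {
  [ -w "$1" ] && echo "$2" > "$1"
}

# Try the RHEL-specific path first, then the upstream location.
for p in /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/scan_sleep_millisecs \
         /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
do
  set_thp_scan_ms "$p" 100 && { echo "set $p"; break; }
done
true  # not an error if neither path exists (e.g. THP disabled)
```

To make the setting persistent across reboots, the same command is typically added to /etc/rc.local on RHEL 6 era systems.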