The problem
A server crashed after multiple "Out of memory" messages and I am trying to pinpoint the culprit. If it is in userland, which process? If it is in the kernel, which kernel module?
Details
I am trying to find out how to use the crash utility to investigate what triggered an OOM on a server.
As part of the installation of a fresh pair of servers, I started the initialization of a 14TB DRBD device. Around that time, while playing with the DRBD syncer rate configuration and bringing some of the bonded network interfaces up and down, one of the servers crashed. Over a 30-second period it produced 39 "Out of memory: Kill process ####" messages. Then it crashed with:
Kernel panic - not syncing: Out of memory and no killable processes...
The system crash triggered a kdump. Now I have a nice vmcore.flat file, which should be straightforward to use to investigate the issue, but I am having a hard time finding out where all the memory went.
The only resources I am aware of are Dedoimedo's site, which has nice instructions, and the Kernel Crash Book. These also happen to be the only resources suggested in the answers, so I assume that crash is the only way to investigate.
If there is another way to do a post-mortem on the incident, I am willing to accept it; it is just that crash is the only utility I am aware of. All I have right now is the vmcore.flat file, and all I need to know is which component ate up all that memory. I suspect a kernel module problem, and more specifically one of the bonding module (as it got triggered when I brought an interface down), the DRBD module (version 8.3.15, built out of tree on CentOS 6.3), or one of the 10G Ethernet modules (mlnx_en, built out of tree, which drives the interface I brought down, or the in-tree bnx2x, which drives the interface that remained active). All I need to know is whether there is a way to verify my suspicion.
So far, I have only managed to extract the following information with the crash utility:
Checked how much memory is used
$ crash /usr/lib/debug/lib/modules/2.6.32-279.5.2.el6.x86_64/vmlinux vmcore.flat
....
crash> kmem -i
              PAGES        TOTAL      PERCENTAGE
 TOTAL MEM  16482587      62.9 GB         ----
      FREE     54610     213.3 MB    0% of TOTAL MEM
      USED  16427977      62.7 GB   99% of TOTAL MEM
    SHARED      4683      18.3 MB    0% of TOTAL MEM
   BUFFERS       118       472 KB    0% of TOTAL MEM
    CACHED        82       328 KB    0% of TOTAL MEM
      SLAB     46635     182.2 MB    0% of TOTAL MEM

TOTAL SWAP         0            0         ----
 SWAP USED         0            0  100% of TOTAL SWAP
 SWAP FREE         0            0    0% of TOTAL SWAP
Obviously, it ran out of memory. All 64G is gone... but where?
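If the leak were in the slab allocator, I assume the per-cache breakdown from kmem -s would name the guilty cache; but the SLAB line above accounts for only 182.2 MB, so the missing memory is apparently not there. For completeness, a sketch of the command:

crash> kmem -s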
Tried to see if any processes are leaking memory
The only command that seems relevant is ps (that is, the ps subcommand of crash). It shows nothing abnormal, but it also shows nothing about kernel threads.
crash> ps
PID PPID CPU TASK ST %MEM VSZ RSS COMM
0 0 0 ffffffff81a8d020 RU 0.0 0 0 [swapper]
> 0 0 1 ffff88102c456040 RU 0.0 0 0 [swapper]
> 0 0 2 ffff88082c772aa0 RU 0.0 0 0 [swapper]
> 0 0 3 ffff88102c456aa0 RU 0.0 0 0 [swapper]
0 0 4 ffff88082c7b8ae0 RU 0.0 0 0 [swapper]
> 0 0 5 ffff88102c457500 RU 0.0 0 0 [swapper]
> 0 0 6 ffff88082c7d6aa0 RU 0.0 0 0 [swapper]
> 0 0 7 ffff88102c506080 RU 0.0 0 0 [swapper]
> 0 0 8 ffff88082c016ae0 RU 0.0 0 0 [swapper]
> 0 0 9 ffff88102c506ae0 RU 0.0 0 0 [swapper]
> 0 0 10 ffff88082c05caa0 RU 0.0 0 0 [swapper]
> 0 0 11 ffff88102c507540 RU 0.0 0 0 [swapper]
> 0 0 12 ffff88082c09cae0 RU 0.0 0 0 [swapper]
.....
4926 1 5 ffff880828a38ae0 ?? 0.0 0 0 mingetty
4928 1 1 ffff88102a4e8040 ?? 0.0 0 0 mingetty
4930 1 19 ffff880827af4080 ?? 0.0 0 0 mingetty
4932 1 2 ffff88100f122040 ?? 0.0 0 0 mingetty
4934 1 18 ffff8810296ea080 ?? 0.0 0 0 mingetty
4936 1047 4 ffff880ff342d540 IN 0.0 11184 948 udevd
4937 1047 5 ffff88082a240080 IN 0.0 11184 948 udevd
5060 3772 2 ffff88082881d540 ?? 0.0 0 0 sshd
5078 1 1 ffff88100f060ae0 ?? 0.0 0 0 sshd
5079 1 1 ffff88082b882ae0 ?? 0.0 0 0 bash
And if I take out the kernel threads (which all show zeroes for %MEM anyway), we can see that I was left with almost nothing running in the end:
crash> ps -u
PID PPID CPU TASK ST %MEM VSZ RSS COMM
1 0 1 ffff88082c41b500 ?? 0.0 19348 348 init
1047 1 2 ffff881029524040 IN 0.0 11188 948 udevd
3171 1 3 ffff880826ccaaa0 IN 0.0 27636 240 auditd
3172 1 17 ffff881029d1b500 IN 0.0 27636 240 auditd
> 3772 1 0 ffff88102b257500 RU 0.0 64072 668 sshd
4800 1 0 ffff88100f061540 ?? 0.0 0 0 dsm_om_shrsvcd
4842 1 16 ffff88100f012ae0 ?? 0.0 0 0 cmcld
4854 1 17 ffff88082a241540 ?? 0.0 0 0 cmlogd
4855 1 3 ffff88082796cae0 ?? 0.0 0 0 cmfileassistd
4856 1 18 ffff88082809d500 ?? 0.0 0 0 cmnetd
4860 1 0 ffff88082705aae0 ?? 0.0 0 0 cmresourced
4924 1 9 ffff88102a4e8aa0 ?? 0.0 0 0 mingetty
4926 1 5 ffff880828a38ae0 ?? 0.0 0 0 mingetty
4928 1 1 ffff88102a4e8040 ?? 0.0 0 0 mingetty
4930 1 19 ffff880827af4080 ?? 0.0 0 0 mingetty
4932 1 2 ffff88100f122040 ?? 0.0 0 0 mingetty
4934 1 18 ffff8810296ea080 ?? 0.0 0 0 mingetty
4936 1047 4 ffff880ff342d540 IN 0.0 11184 948 udevd
4937 1047 5 ffff88082a240080 IN 0.0 11184 948 udevd
5060 3772 2 ffff88082881d540 ?? 0.0 0 0 sshd
5078 1 1 ffff88100f060ae0 ?? 0.0 0 0 sshd
5079 1 1 ffff88082b882ae0 ?? 0.0 0 0 bash
5257 1 1 ffff8808279e6aa0 ?? 0.0 0 0 jnx_mlnxsnmp_da
Update:
Including some more output, as suggested by Soham. Unfortunately, I cannot derive any further conclusions from it. The best I can do is suspect that something in the kernel is leaking memory, since the userland processes are almost all dead anyway.
The (almost full) output of log -m is here.
crash> ps -G | tail -n +2 | cut -b2- | gawk '{mem += $8} END {print "total " mem/1048576 "GB"}'
total 0.00391006GB
Note that almost all userland processes are dead at this time, so the low reported usage is expected. (The RSS column is in KiB; dividing the sum by 1048576 gives GiB.)
Out of memory messages:
As I mentioned above, there are 39 "Out of memory:" messages, and here they are:
crash> log -m | grep Out
<3>[ 223.556616] Out of memory: Kill process 3189 (portreserve) score 1 or sacrifice child
<3>[ 223.787234] Out of memory: Kill process 3196 (rsyslogd) score 1 or sacrifice child
<3>[ 224.237119] Out of memory: Kill process 3728 (dbus-daemon) score 1 or sacrifice child
<3>[ 228.771770] Out of memory: Kill process 3758 (snmpd) score 1 or sacrifice child
<3>[ 229.033466] Out of memory: Kill process 3782 (xinetd) score 1 or sacrifice child
<3>[ 229.257710] Out of memory: Kill process 3782 (xinetd) score 1 or sacrifice child
<3>[ 229.484321] Out of memory: Kill process 3782 (xinetd) score 1 or sacrifice child
<3>[ 229.711169] Out of memory: Kill process 3782 (xinetd) score 1 or sacrifice child
<3>[ 229.934955] Out of memory: Kill process 3801 (cmproxyd) score 1 or sacrifice child
<3>[ 230.159542] Out of memory: Kill process 3812 (ntpd) score 1 or sacrifice child
<3>[ 230.382083] Out of memory: Kill process 3953 (master) score 1 or sacrifice child
<3>[ 230.606613] Out of memory: Kill process 3953 (master) score 1 or sacrifice child
<3>[ 230.829515] Out of memory: Kill process 3953 (master) score 1 or sacrifice child
<3>[ 230.832105] Out of memory: Kill process 3961 (crond) score 1 or sacrifice child
<3>[ 236.749746] Out of memory: Kill process 3974 (atd) score 1 or sacrifice child
<3>[ 236.969421] Out of memory: Kill process 4272 (dsm_sa_datamgrd) score 1 or sacrifice child
<3>[ 237.192102] Out of memory: Kill process 4492 (dsm_sa_datamgrd) score 1 or sacrifice child
<3>[ 237.746301] Out of memory: Kill process 4552 (dsm_sa_eventmgr) score 1 or sacrifice child
<3>[ 237.968308] Out of memory: Kill process 4613 (dsm_sa_snmpd) score 1 or sacrifice child
<3>[ 238.190550] Out of memory: Kill process 4614 (dsm_sa_snmpd) score 1 or sacrifice child
<3>[ 238.644020] Out of memory: Kill process 4643 (dsm_om_connsvcd) score 1 or sacrifice child
<3>[ 238.865658] Out of memory: Kill process 4643 (dsm_om_connsvcd) score 1 or sacrifice child
<3>[ 251.285450] Out of memory: Kill process 4643 (dsm_om_connsvcd) score 1 or sacrifice child
<3>[ 251.506601] Out of memory: Kill process 4800 (dsm_om_shrsvcd) score 1 or sacrifice child
<3>[ 251.727570] Out of memory: Kill process 4842 (cmcld) score 1 or sacrifice child
<3>[ 251.947085] Out of memory: Kill process 4842 (cmcld) score 1 or sacrifice child
<3>[ 252.167096] Out of memory: Kill process 4854 (cmlogd) score 1 or sacrifice child
<3>[ 252.384090] Out of memory: Kill process 4855 (cmfileassistd) score 1 or sacrifice child
<3>[ 252.603324] Out of memory: Kill process 4924 (mingetty) score 1 or sacrifice child
<3>[ 252.820757] Out of memory: Kill process 4926 (mingetty) score 1 or sacrifice child
<3>[ 253.037558] Out of memory: Kill process 4928 (mingetty) score 1 or sacrifice child
<3>[ 253.254908] Out of memory: Kill process 4930 (mingetty) score 1 or sacrifice child
<3>[ 253.257391] Out of memory: Kill process 4932 (mingetty) score 1 or sacrifice child
<3>[ 253.259357] Out of memory: Kill process 4934 (mingetty) score 1 or sacrifice child
<3>[ 253.261353] Out of memory: Kill process 5060 (sshd) score 1 or sacrifice child
<3>[ 253.263365] Out of memory: Kill process 5060 (sshd) score 1 or sacrifice child
<3>[ 253.264392] Out of memory: Kill process 5079 (bash) score 1 or sacrifice child
<3>[ 253.266352] Out of memory: Kill process 5257 (jnx_mlnxsnmp_da) score 1 or sacrifice child
<0>[ 253.529344] Kernel panic - not syncing: Out of memory and no killable processes...
sys output:
crash> sys
KERNEL: /usr/lib/debug/lib/modules/2.6.32-279.5.2.el6.x86_64/vmlinux
DUMPFILE: pcdata03.vmcore.flat [PARTIAL DUMP]
CPUS: 32
DATE: Wed Feb 6 02:11:52 2013
UPTIME: 00:04:12
LOAD AVERAGE: 3.03, 0.95, 0.34
TASKS: 578
NODENAME: ....
RELEASE: 2.6.32-279.5.2.el6.x86_64
VERSION: #1 SMP Fri Aug 24 01:07:11 UTC 2012
MACHINE: x86_64 (2700 Mhz)
MEMORY: 64 GB
PANIC: "[ 253.529344] Kernel panic - not syncing: Out of memory and no killable processes..."
kmem -z
crash> kmem -z
NODE: 0 ZONE: 0 ADDR: ffff88000000a0c0 NAME: "DMA"
SIZE: 4095 PRESENT: 3839 MIN/LOW/HIGH: 5/6/7
VM_STAT:
NR_FREE_PAGES: 3936
NR_INACTIVE_ANON: 0
NR_ACTIVE_ANON: 0
NR_INACTIVE_FILE: 0
NR_ACTIVE_FILE: 0
NR_UNEVICTABLE: 0
NR_MLOCK: 0
NR_ANON_PAGES: 0
NR_FILE_MAPPED: 0
NR_FILE_PAGES: 0
NR_FILE_DIRTY: 0
NR_WRITEBACK: 0
NR_SLAB_RECLAIMABLE: 0
NR_SLAB_UNRECLAIMABLE: 0
NR_PAGETABLE: 0
NR_KERNEL_STACK: 0
NR_UNSTABLE_NFS: 0
NR_BOUNCE: 0
NR_VMSCAN_WRITE: 0
NR_VMSCAN_IMMEDIATE: 0
NR_WRITEBACK_TEMP: 0
NR_ISOLATED_ANON: 0
NR_ISOLATED_FILE: 0
NR_SHMEM: 0
NUMA_HIT: 0
NUMA_MISS: 0
NUMA_FOREIGN: 0
NUMA_INTERLEAVE_HIT: 0
NUMA_LOCAL: 0
NUMA_OTHER: 0
NR_ANON_TRANSPARENT_HUGEPAGES: 0
NODE: 0 ZONE: 1 ADDR: ffff880000012780 NAME: "DMA32"
SIZE: 1044480 PRESENT: 756520 MIN/LOW/HIGH: 1030/1287/1545
VM_STAT:
NR_FREE_PAGES: 30117
NR_INACTIVE_ANON: 0
NR_ACTIVE_ANON: 0
NR_INACTIVE_FILE: 1
NR_ACTIVE_FILE: 0
NR_UNEVICTABLE: 0
NR_MLOCK: 0
NR_ANON_PAGES: 0
NR_FILE_MAPPED: 0
NR_FILE_PAGES: 1
NR_FILE_DIRTY: 0
NR_WRITEBACK: 0
NR_SLAB_RECLAIMABLE: 4
NR_SLAB_UNRECLAIMABLE: 4150
NR_PAGETABLE: 0
NR_KERNEL_STACK: 0
NR_UNSTABLE_NFS: 0
NR_BOUNCE: 0
NR_VMSCAN_WRITE: 0
NR_VMSCAN_IMMEDIATE: 0
NR_WRITEBACK_TEMP: 0
NR_ISOLATED_ANON: 0
NR_ISOLATED_FILE: 0
NR_SHMEM: 0
NUMA_HIT: 575606
NUMA_MISS: 3
NUMA_FOREIGN: 0
NUMA_INTERLEAVE_HIT: 0
NUMA_LOCAL: 575598
NUMA_OTHER: 11
NR_ANON_TRANSPARENT_HUGEPAGES: 0
NODE: 0 ZONE: 2 ADDR: ffff88000001ae40 NAME: "Normal"
SIZE: 7602176 PRESENT: 7498240 MIN/LOW/HIGH: 10217/12771/15325
VM_STAT:
NR_FREE_PAGES: 10443
NR_INACTIVE_ANON: 134
NR_ACTIVE_ANON: 197
NR_INACTIVE_FILE: -47
NR_ACTIVE_FILE: 42
NR_UNEVICTABLE: 0
NR_MLOCK: 0
NR_ANON_PAGES: 219
NR_FILE_MAPPED: 115
NR_FILE_PAGES: 45
NR_FILE_DIRTY: 0
NR_WRITEBACK: 0
NR_SLAB_RECLAIMABLE: 908
NR_SLAB_UNRECLAIMABLE: 18771
NR_PAGETABLE: 91
NR_KERNEL_STACK: 556
NR_UNSTABLE_NFS: 0
NR_BOUNCE: 0
NR_VMSCAN_WRITE: 0
NR_VMSCAN_IMMEDIATE: 0
NR_WRITEBACK_TEMP: 0
NR_ISOLATED_ANON: 0
NR_ISOLATED_FILE: 0
NR_SHMEM: 34
NUMA_HIT: 8243991
NUMA_MISS: 648
NUMA_FOREIGN: 4593726
NUMA_INTERLEAVE_HIT: 20066
NUMA_LOCAL: 8243829
NUMA_OTHER: 810
NR_ANON_TRANSPARENT_HUGEPAGES: 0
NODE: 0 ZONE: 3 ADDR: ffff880000023500 NAME: "Movable"
[unpopulated]
NODE: 1 ZONE: 0 ADDR: ffff880840000040 NAME: "DMA"
[unpopulated]
NODE: 1 ZONE: 1 ADDR: ffff880840008700 NAME: "DMA32"
[unpopulated]
NODE: 1 ZONE: 2 ADDR: ffff880840010dc0 NAME: "Normal"
SIZE: 8388608 PRESENT: 8273920 MIN/LOW/HIGH: 11274/14092/16911
VM_STAT:
NR_FREE_PAGES: 10114
NR_INACTIVE_ANON: 417
NR_ACTIVE_ANON: 83
NR_INACTIVE_FILE: 47
NR_ACTIVE_FILE: 32
NR_UNEVICTABLE: 0
NR_MLOCK: 0
NR_ANON_PAGES: 436
NR_FILE_MAPPED: 22
NR_FILE_PAGES: 154
NR_FILE_DIRTY: 0
NR_WRITEBACK: 0
NR_SLAB_RECLAIMABLE: 863
NR_SLAB_UNRECLAIMABLE: 21939
NR_PAGETABLE: 134
NR_KERNEL_STACK: 27
NR_UNSTABLE_NFS: 0
NR_BOUNCE: 0
NR_VMSCAN_WRITE: 3
NR_VMSCAN_IMMEDIATE: 5
NR_WRITEBACK_TEMP: 0
NR_ISOLATED_ANON: 0
NR_ISOLATED_FILE: 23
NR_SHMEM: 20
NUMA_HIT: 4332488
NUMA_MISS: 4593726
NUMA_FOREIGN: 665
NUMA_INTERLEAVE_HIT: 20007
NUMA_LOCAL: 4309300
NUMA_OTHER: 4616914
NR_ANON_TRANSPARENT_HUGEPAGES: 0
NODE: 1 ZONE: 3 ADDR: ffff880840019480 NAME: "Movable"
[unpopulated]
kmem -f
crash> kmem -f
NODE
0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
0 DMA 4095 3936 ffffea0000000038 1000 0
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
0 4k ffff880000012128 2 2
0 4k ffff880000012138 0 0
0 4k ffff880000012148 0 0
0 4k ffff880000012158 0 0
0 4k ffff880000012168 0 0
1 8k ffff880000012180 1 2
1 8k ffff880000012190 0 0
1 8k ffff8800000121a0 0 0
1 8k ffff8800000121b0 0 0
1 8k ffff8800000121c0 0 0
2 16k ffff8800000121d8 1 4
2 16k ffff8800000121e8 0 0
2 16k ffff8800000121f8 0 0
2 16k ffff880000012208 0 0
2 16k ffff880000012218 0 0
3 32k ffff880000012230 1 8
3 32k ffff880000012240 0 0
3 32k ffff880000012250 0 0
3 32k ffff880000012260 0 0
3 32k ffff880000012270 0 0
4 64k ffff880000012288 1 16
4 64k ffff880000012298 0 0
4 64k ffff8800000122a8 0 0
4 64k ffff8800000122b8 0 0
4 64k ffff8800000122c8 0 0
5 128k ffff8800000122e0 0 0
5 128k ffff8800000122f0 0 0
5 128k ffff880000012300 0 0
5 128k ffff880000012310 0 0
5 128k ffff880000012320 0 0
6 256k ffff880000012338 1 64
6 256k ffff880000012348 0 0
6 256k ffff880000012358 0 0
6 256k ffff880000012368 0 0
6 256k ffff880000012378 0 0
7 512k ffff880000012390 0 0
7 512k ffff8800000123a0 0 0
7 512k ffff8800000123b0 0 0
7 512k ffff8800000123c0 0 0
7 512k ffff8800000123d0 0 0
8 1024k ffff8800000123e8 1 256
8 1024k ffff8800000123f8 0 0
8 1024k ffff880000012408 0 0
8 1024k ffff880000012418 0 0
8 1024k ffff880000012428 0 0
9 2048k ffff880000012440 0 0
9 2048k ffff880000012450 0 0
9 2048k ffff880000012460 0 0
9 2048k ffff880000012470 1 512
9 2048k ffff880000012480 0 0
10 4096k ffff880000012498 0 0
10 4096k ffff8800000124a8 0 0
10 4096k ffff8800000124b8 3 3072
10 4096k ffff8800000124c8 0 0
10 4096k ffff8800000124d8 0 0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
1 DMA32 1044480 30117 ffffea0000038000 1000000 4095
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
0 4k ffff88000001a7e8 24 24
0 4k ffff88000001a7f8 4 4
0 4k ffff88000001a808 13 13
0 4k ffff88000001a818 0 0
0 4k ffff88000001a828 0 0
1 8k ffff88000001a840 2 4
1 8k ffff88000001a850 2 4
1 8k ffff88000001a860 4 8
1 8k ffff88000001a870 0 0
1 8k ffff88000001a880 0 0
2 16k ffff88000001a898 0 0
2 16k ffff88000001a8a8 3 12
2 16k ffff88000001a8b8 4 16
2 16k ffff88000001a8c8 0 0
2 16k ffff88000001a8d8 0 0
3 32k ffff88000001a8f0 0 0
3 32k ffff88000001a900 3 24
3 32k ffff88000001a910 3 24
3 32k ffff88000001a920 0 0
3 32k ffff88000001a930 0 0
4 64k ffff88000001a948 1 16
4 64k ffff88000001a958 3 48
4 64k ffff88000001a968 6 96
4 64k ffff88000001a978 0 0
4 64k ffff88000001a988 0 0
5 128k ffff88000001a9a0 0 0
5 128k ffff88000001a9b0 3 96
5 128k ffff88000001a9c0 7 224
5 128k ffff88000001a9d0 0 0
5 128k ffff88000001a9e0 0 0
6 256k ffff88000001a9f8 0 0
6 256k ffff88000001aa08 1 64
6 256k ffff88000001aa18 6 384
6 256k ffff88000001aa28 0 0
6 256k ffff88000001aa38 0 0
7 512k ffff88000001aa50 1 128
7 512k ffff88000001aa60 0 0
7 512k ffff88000001aa70 8 1024
7 512k ffff88000001aa80 0 0
7 512k ffff88000001aa90 0 0
8 1024k ffff88000001aaa8 1 256
8 1024k ffff88000001aab8 1 256
8 1024k ffff88000001aac8 5 1280
8 1024k ffff88000001aad8 0 0
8 1024k ffff88000001aae8 0 0
9 2048k ffff88000001ab00 0 0
9 2048k ffff88000001ab10 1 512
9 2048k ffff88000001ab20 3 1536
9 2048k ffff88000001ab30 1 512
9 2048k ffff88000001ab40 0 0
10 4096k ffff88000001ab58 0 0
10 4096k ffff88000001ab68 0 0
10 4096k ffff88000001ab78 22 22528
10 4096k ffff88000001ab88 1 1024
10 4096k ffff88000001ab98 0 0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
2 Normal 7602176 10443 ffffea0003800000 100000000 1048575
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
0 4k ffff880000022ea8 365 365
0 4k ffff880000022eb8 274 274
0 4k ffff880000022ec8 274 274
0 4k ffff880000022ed8 0 0
0 4k ffff880000022ee8 0 0
1 8k ffff880000022f00 99 198
1 8k ffff880000022f10 94 188
1 8k ffff880000022f20 360 720
1 8k ffff880000022f30 0 0
1 8k ffff880000022f40 0 0
2 16k ffff880000022f58 30 120
2 16k ffff880000022f68 41 164
2 16k ffff880000022f78 204 816
2 16k ffff880000022f88 0 0
2 16k ffff880000022f98 0 0
3 32k ffff880000022fb0 9 72
3 32k ffff880000022fc0 19 152
3 32k ffff880000022fd0 138 1104
3 32k ffff880000022fe0 0 0
3 32k ffff880000022ff0 0 0
4 64k ffff880000023008 7 112
4 64k ffff880000023018 4 64
4 64k ffff880000023028 77 1232
4 64k ffff880000023038 0 0
4 64k ffff880000023048 0 0
5 128k ffff880000023060 3 96
5 128k ffff880000023070 3 96
5 128k ffff880000023080 43 1376
5 128k ffff880000023090 0 0
5 128k ffff8800000230a0 0 0
6 256k ffff8800000230b8 0 0
6 256k ffff8800000230c8 0 0
6 256k ffff8800000230d8 13 832
6 256k ffff8800000230e8 0 0
6 256k ffff8800000230f8 0 0
7 512k ffff880000023110 0 0
7 512k ffff880000023120 0 0
7 512k ffff880000023130 5 640
7 512k ffff880000023140 0 0
7 512k ffff880000023150 0 0
8 1024k ffff880000023168 0 0
8 1024k ffff880000023178 0 0
8 1024k ffff880000023188 0 0
8 1024k ffff880000023198 0 0
8 1024k ffff8800000231a8 0 0
9 2048k ffff8800000231c0 0 0
9 2048k ffff8800000231d0 0 0
9 2048k ffff8800000231e0 1 512
9 2048k ffff8800000231f0 0 0
9 2048k ffff880000023200 0 0
10 4096k ffff880000023218 0 0
10 4096k ffff880000023228 0 0
10 4096k ffff880000023238 0 0
10 4096k ffff880000023248 1 1024
10 4096k ffff880000023258 0 0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
3 Movable 0 0 0 0 0
--------------------------------------------------------------------------
NODE
1
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
0 DMA 0 0 0 0 0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
1 DMA32 0 0 0 0 0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
2 Normal 8388608 10114 ffffea001ce00000 840000000 0
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
0 4k ffff880840018e28 405 405
0 4k ffff880840018e38 162 162
0 4k ffff880840018e48 317 317
0 4k ffff880840018e58 0 0
0 4k ffff880840018e68 0 0
1 8k ffff880840018e80 106 212
1 8k ffff880840018e90 70 140
1 8k ffff880840018ea0 269 538
1 8k ffff880840018eb0 0 0
1 8k ffff880840018ec0 0 0
2 16k ffff880840018ed8 24 96
2 16k ffff880840018ee8 18 72
2 16k ffff880840018ef8 207 828
2 16k ffff880840018f08 0 0
2 16k ffff880840018f18 0 0
3 32k ffff880840018f30 20 160
3 32k ffff880840018f40 4 32
3 32k ffff880840018f50 148 1184
3 32k ffff880840018f60 0 0
3 32k ffff880840018f70 0 0
4 64k ffff880840018f88 17 272
4 64k ffff880840018f98 2 32
4 64k ffff880840018fa8 95 1520
4 64k ffff880840018fb8 0 0
4 64k ffff880840018fc8 0 0
5 128k ffff880840018fe0 4 128
5 128k ffff880840018ff0 1 32
5 128k ffff880840019000 37 1184
5 128k ffff880840019010 0 0
5 128k ffff880840019020 0 0
6 256k ffff880840019038 0 0
6 256k ffff880840019048 0 0
6 256k ffff880840019058 8 512
6 256k ffff880840019068 0 0
6 256k ffff880840019078 0 0
7 512k ffff880840019090 0 0
7 512k ffff8808400190a0 0 0
7 512k ffff8808400190b0 1 128
7 512k ffff8808400190c0 0 0
7 512k ffff8808400190d0 0 0
8 1024k ffff8808400190e8 0 0
8 1024k ffff8808400190f8 0 0
8 1024k ffff880840019108 1 256
8 1024k ffff880840019118 0 0
8 1024k ffff880840019128 0 0
9 2048k ffff880840019140 0 0
9 2048k ffff880840019150 0 0
9 2048k ffff880840019160 1 512
9 2048k ffff880840019170 1 512
9 2048k ffff880840019180 0 0
10 4096k ffff880840019198 0 0
10 4096k ffff8808400191a8 0 0
10 4096k ffff8808400191b8 0 0
10 4096k ffff8808400191c8 1 1024
10 4096k ffff8808400191d8 0 0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
3 Movable 0 0 0 0 0
nr_free_pages: 54610 (found 54742)
A little exercise with foreach bt
crash> foreach bt | awk '$1 == "#0" { $2 = ""; print }' | sort | uniq -c
31 #0 crash_nmi_callback at ffffffff81029df6
1 #0 machine_kexec at ffffffff8103281b
546 #0 schedule at ffffffff814fda62
Indeed: 31 CPUs were parked by the crash NMI, one CPU is performing the kexec into the kdump kernel (31 + 1 matches the 32 CPUs in the sys output), and the remaining 546 tasks are simply sleeping in schedule, presumably waiting for memory (or I am not reading it correctly).
To check the top 20 largest consumers of physical memory (resident set size):
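A sketch of one way to get this from crash's ps (the sed strips the leading ">" active-task marker so that the RSS column, field 8, sorts cleanly):

crash> ps -G | sed 's/>//g' | sort -k 8,8 -n | tail -20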
To check the number of hugepages:
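One option should be dumping the hugetlb bookkeeping directly; the symbol name is an assumption based on the 2.6.32 source, where nr_huge_pages inside each hstate gives the pool size:

crash> p hstates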
Update:
A) The crash dump was captured from the following kernel version: 2.6.32-279.5.2.el6.x86_64 (see the sys output above).
B) Let's extract the vmlinux file from the matching kernel-debuginfo package.
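A sketch of the extraction; the exact RPM file name is an assumption:

$ rpm2cpio kernel-debuginfo-2.6.32-279.5.2.el6.x86_64.rpm | cpio -idv ./usr/lib/debug/lib/modules/2.6.32-279.5.2.el6.x86_64/vmlinux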
C) Open the vmcore file using the crash utility.
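Presumably the same invocation as shown at the top of the question:

$ crash /usr/lib/debug/lib/modules/2.6.32-279.5.2.el6.x86_64/vmlinux vmcore.flat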
D) System information (see the sys output above).
a) The panic happened due to out-of-memory, but the "panic_on_oom" parameter is disabled on the system.
This parameter enables or disables panicking on out-of-memory. If it is set to 0, the kernel invokes the so-called oom_killer, which kills some rogue process; usually this works and the system survives. If it is set to 1, the kernel panics when out-of-memory happens.
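Within crash, the setting can presumably be read straight from the kernel variable (symbol name assumed from the 2.6.32 source):

crash> p sysctl_panic_on_oom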
b) So how did we capture a vmcore at the time of the OOM event?
Well, let's check the mm/oom_kill.c source code. It says that if nothing is left on the system to kill, the kernel either hangs forever or panics.
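Abridged from 2.6.32 mm/oom_kill.c (quoted from memory, so treat it as a sketch of the logic rather than an exact excerpt):

retry:
        p = select_bad_process(&points, NULL);
        if (PTR_ERR(p) == -1UL)
                return;

        /* Found nothing?!?! Either we hang forever, or we panic. */
        if (!p) {
                read_unlock(&tasklist_lock);
                panic("Out of memory and no killable processes...\n");
        }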
So we reached the panic state, and since the kdump service was enabled on this system, a vmcore was captured.
E) Let's check the kernel ring buffer (the log -m output and the 39 OOM messages are quoted above).
F) Let's check the memory status on the system at the time of the crash (see the kmem -i output above: 62.7 GB of 62.9 GB used, only 213.3 MB free).
G) The total memory allocated to userland processes is 0.00391006 GiB (the ps -G sum above).
H) So the application processes are not the ones consuming memory on the system.
I) Let's verify the memory tuning parameters on the system.
vm.overcommit_memory contains a flag that enables memory overcommitment. When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory. When it is set to 2, the committed address space is not permitted to exceed swap plus overcommit_ratio percent of physical RAM.
vm.zone_reclaim_mode sets a more or less aggressive approach to reclaiming memory when a zone runs out of it. If it is set to zero, no zone reclaim occurs.
vm.min_free_kbytes is the minimum number of kilobytes to keep free across the system. It is used to compute a watermark for each low-memory zone, which is then assigned a number of reserved free pages proportional to its size. Be careful when setting this parameter, as both too-low and too-high values can be damaging.
In other words, setting min_free_kbytes too low prevents the system from reclaiming memory, which can result in system hangs and the OOM killer taking out multiple processes; setting it too high (5-10% of total system memory) will cause the system to run out of memory almost immediately, since Linux is designed to use all available RAM to cache file system data, and a high min_free_kbytes just makes the system spend too much time reclaiming memory. The values of the above parameters look okay (read from the dump as sketched below). So where is my memory???
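For reference, these tunables can presumably be read straight from the dump via their kernel symbols (names assumed from the 2.6.32 source):

crash> p sysctl_overcommit_memory
crash> p sysctl_overcommit_ratio
crash> p zone_reclaim_mode
crash> p min_free_kbytes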
Assumptions:
Can you run this command? Also, kmem -z and kmem -f might help.
However, note the swap line in kmem -i: SWAP USED reads 100% of TOTAL SWAP, but TOTAL SWAP is 0, i.e. there is no swap configured to absorb the pressure. You must have gotten some OOM messages from the log command; can you pastebin them? I can see the mode and race condition too.
On another note, a sys output will help immensely. You know, memory leaks and kernel upgrades are almost synonymous ;)
EDIT: Try this yourself: foreach bt
It will show the trace for each PID. Look for any common pattern; it could be that all of them are sitting in the schedule function (that is what the "schedule at ffffffff814fda62" frames above are), meaning they are waiting on something such as a memory allocation. Now, note the value of TASK in the per-PID sections of the foreach bt output.
Run this. If they belong to the same PID, the processes you are seeing are threads of that process.
I forgot how to check for the overcommit value; I will see if I can find out.
I do not have much experience with analyzing crash dumps and therefore I am not able to help you with specific advice, but here are some links which I have collected and used from time to time. Maybe you will find something useful there:
You may know this and simply not have indicated that this is what you did, but there are lots of options to ps.
or
as root will spit out much more detailed information.
There are a lot of helpful suggestions on this page for tracking down memory issues:
http://www.linuxnix.com/2011/05/find-ram-utilization-user-linux.html
I'd check
periodically to see how much memory is actually being used. (Or better yet, have it graphed; Graphite or Munin are great for this.) That way you can visualize when and how your memory is being used.
64 gigs of RAM is quite a bit; what sort of work does the host do?