I have a server that is now hitting "invoked oom-killer" many times a day! I had never seen this before, and even after reading tons of forum threads I'm still puzzled: is it a serious problem? Why is it happening, and what is causing it? Should I avoid it, and how?
When it happens, it can go on for 30 minutes, producing 1000+ log lines like this:
May 9 08:12:41 myserver kernel: postmaster invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 9 08:12:41 myserver kernel:
May 9 08:12:41 myserver kernel: Call Trace:
May 9 08:12:41 myserver kernel: [<ffffffff800c9d70>] out_of_memory+0x8e/0x2f3
May 9 08:12:41 myserver kernel: [<ffffffff8000f677>] __alloc_pages+0x27f/0x308
May 9 08:12:41 myserver kernel: [<ffffffff80013034>] __do_page_cache_readahead+0x96/0x17b
May 9 08:12:41 myserver kernel: [<ffffffff80013971>] filemap_nopage+0x14c/0x360
May 9 08:12:41 myserver kernel: [<ffffffff8000896c>] __handle_mm_fault+0x1fd/0x103b
May 9 08:12:41 myserver kernel: [<ffffffff80018415>] do_sync_write+0xc7/0x104
May 9 08:12:41 myserver kernel: [<ffffffff8002dfc7>] __wake_up+0x38/0x4f
May 9 08:12:41 myserver kernel: [<ffffffff800671f2>] do_page_fault+0x499/0x842
May 9 08:12:41 myserver kernel: [<ffffffff800a2e52>] autoremove_wake_function+0x0/0x2e
May 9 08:12:42 myserver kernel: [<ffffffff8005dde9>] error_exit+0x0/0x84
May 9 08:12:42 myserver kernel:
May 9 08:12:42 myserver kernel: Mem-info:
May 9 08:12:42 myserver kernel: Node 0 DMA per-cpu:
May 9 08:12:42 myserver kernel: cpu 0 hot: high 0, batch 1 used:0
May 9 08:12:42 myserver kernel: cpu 0 cold: high 0, batch 1 used:0
May 9 08:12:42 myserver kernel: cpu 1 hot: high 0, batch 1 used:0
May 9 08:12:42 myserver kernel: cpu 1 cold: high 0, batch 1 used:0
May 9 08:12:42 myserver kernel: Node 0 DMA32 per-cpu:
May 9 08:12:42 myserver kernel: cpu 0 hot: high 186, batch 31 used:30
May 9 08:12:42 myserver kernel: cpu 0 cold: high 62, batch 15 used:54
May 9 08:12:42 myserver kernel: cpu 1 hot: high 186, batch 31 used:18
May 9 08:12:42 myserver kernel: cpu 1 cold: high 62, batch 15 used:56
May 9 08:12:42 myserver kernel: Node 0 Normal per-cpu:
May 9 08:12:42 myserver kernel: cpu 0 hot: high 186, batch 31 used:50
May 9 08:12:42 myserver kernel: cpu 0 cold: high 62, batch 15 used:17
May 9 08:12:42 myserver kernel: cpu 1 hot: high 186, batch 31 used:28
May 9 08:12:42 myserver kernel: cpu 1 cold: high 62, batch 15 used:48
May 9 08:12:42 myserver kernel: Node 0 HighMem per-cpu: empty
May 9 08:12:43 myserver kernel: Free pages: 21156kB (0kB HighMem)
May 9 08:12:43 myserver kernel: Active:507857 inactive:477567 dirty:0 writeback:0 unstable:0 free:5289 slab:4642 mapped-file:1087 mapped-anon:984209 pagetables:8234
May 9 08:12:43 myserver kernel: Node 0 DMA free:10120kB min:16kB low:20kB high:24kB active:0kB inactive:0kB present:9724kB pages_scanned:0 all_unreclaimable? yes
May 9 08:12:43 myserver kernel: lowmem_reserve[]: 0 3250 4008 4008
May 9 08:12:43 myserver kernel: Node 0 DMA32 free:9548kB min:6560kB low:8200kB high:9840kB active:1623104kB inactive:1648888kB present:3328864kB pages_scanned:5466591 all_unreclaimable? yes
May 9 08:12:43 myserver kernel: lowmem_reserve[]: 0 0 757 757
May 9 08:12:43 myserver kernel: Node 0 Normal free:1488kB min:1528kB low:1908kB high:2292kB active:408324kB inactive:261380kB present:775680kB pages_scanned:1657520 all_unreclaimable? yes
May 9 08:12:43 myserver kernel: lowmem_reserve[]: 0 0 0 0
May 9 08:12:43 myserver kernel: Node 0 HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
May 9 08:12:43 myserver kernel: lowmem_reserve[]: 0 0 0 0
May 9 08:12:43 myserver kernel: Node 0 DMA: 4*4kB 5*8kB 3*16kB 3*32kB 5*64kB 3*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 2*4096kB = 10120kB
May 9 08:12:43 myserver kernel: Node 0 DMA32: 13*4kB 5*8kB 1*16kB 1*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 2*4096kB = 9548kB
May 9 08:12:43 myserver kernel: Node 0 Normal: 16*4kB 4*8kB 1*16kB 1*32kB 5*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 1488kB
May 9 08:12:43 myserver kernel: Node 0 HighMem: empty
May 9 08:12:43 myserver kernel: 2313 pagecache pages
May 9 08:12:43 myserver kernel: Swap cache: add 105602811, delete 105602345, find 70994956/74685517, race 3758+718169
May 9 08:12:43 myserver kernel: Free swap = 0kB
May 9 08:12:43 myserver kernel: Total swap = 4192956kB
May 9 08:12:43 myserver kernel: Free swap: 0kB
May 9 08:12:43 myserver kernel: 1245184 pages of RAM
May 9 08:12:43 myserver kernel: 234724 reserved pages
May 9 08:12:44 myserver kernel: 12118 pages shared
May 9 08:12:44 myserver kernel: 601 pages swap cached
May 9 08:12:44 myserver kernel: Out of memory: Killed process 13121, UID 48, (httpd).
Probably, yes: the kernel is killing processes (in your log, Apache's `httpd`, UID 48) because the system has completely run out of memory.
It happens because the memory your processes need exceeds your available RAM plus swap. Your log shows `Free swap = 0kB`, so your 4 GB of swap is fully exhausted as well.
The applications running on the system are the likely cause; your log shows PostgreSQL's `postmaster` triggering the killer and `httpd` being chosen as the victim, so those two are the first to investigate.
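To see which processes are actually eating the memory, you can sort by resident set size. This is a generic diagnostic, not specific to your setup:

```shell
# Top 10 processes by resident memory (RSS, in KB), highest first
ps -eo pid,rss,comm --sort=-rss | head -n 10
```

Run it while memory pressure is building and the culprit usually stands out immediately.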
Yes, you should avoid the OOM killer: it picks its victim heuristically, so it can take down a process you care about rather than the one that misbehaved.
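If you cannot avoid memory pressure entirely, you can at least steer the OOM killer away from critical processes. Your log shows the old `oomkilladj` interface; recent kernels expose `/proc/<pid>/oom_score_adj` instead, ranging from -1000 (never kill) to +1000 (kill first). A sketch, where `<pid>` is a placeholder for the process you want to protect:

```shell
# Protect a critical process (root required for negative values);
# -1000 exempts it from OOM killing entirely:
#   echo -500 > /proc/<pid>/oom_score_adj

# An unprivileged process may only raise its own score, volunteering
# itself as a preferred victim:
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj   # children inherit the adjusted score
```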
Add more memory. Failing that, add more swap (you already have ~4 GB and it is completely full) or reduce the memory footprint of your applications.
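A quick way to check your current headroom, plus the usual swap-file recipe. The 4 GiB size is illustrative; the root-only steps are shown commented out:

```shell
# Current RAM and swap totals/free, straight from the kernel
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree):' /proc/meminfo

# To add a 4 GiB swap file, as root:
#   fallocate -l 4G /swapfile
#   chmod 600 /swapfile
#   mkswap /swapfile
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```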
For more background on how the OOM killer works: http://linux-mm.org/OOM_Killer