This is related to: Out of memory at 72% usage
It looks to be the same problem, but the question is slightly different: where does my memory go? I have 18% memory usage and the OOM killer is killing mysqld every 10 minutes.
I was able to gather some information:
1 - Thanks to https://serverfault.com/a/619681/182343 I found that the OOM killer report shows DMA32 + DMA + Normal usage at 96% (the report: https://pastebin.com/UJUiSsSi) ... so there is a problem ...
2 - The process list from the OOM killer: https://pastebin.com/yYTD4QzW
3 - free, top, htop and other tools show me at most 18% RAM usage. Here is top sorted by RAM usage (https://pastebin.com/DEDV1HWb)
4 - free -m tells nothing about the RAM problem:
              total        used        free      shared  buff/cache   available
Mem:           6809         414         470         201        5924        5825
(I have added some swap, as I had no swap on this virtual machine, but nothing changed; no swap is used.)
5 (EDIT) - Thanks to Daniel Gordi I cleaned up my buff/cache with free && sync && echo 3 > /proc/sys/vm/drop_caches && free and ran the OOM killer manually with echo f > /proc/sysrq-trigger. And, WTF, the OOM killer RAM report (DMA32 + DMA + Normal) now shows my expected RAM usage: 18%! I always thought that buff/cache was memory the OS would free when it needs it... (a couple of extra checks are sketched just after this list)
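For what it's worth, here is a rough way to look at per-zone and kernel slab usage directly, without triggering the OOM killer (these are standard /proc interfaces; slabtop may need to be installed and run as root):
cat /proc/buddyinfo                                          # free pages per zone (DMA / DMA32 / Normal), by order
grep -E 'Node|pages free' /proc/zoneinfo                     # free page count per zone
grep -E 'Slab|SReclaimable|SUnreclaim|Shmem' /proc/meminfo   # kernel slab and shared memory counters
slabtop -o | head -20                                        # biggest slab caches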
Why and where is the RAM being eaten?
(I really hope I can get some help here, as my production server has been really unstable since this problem appeared :( Thanks)
Try to find which processes are using your RAM with
ps aux --sort -rss
According to your server's output for
free -m
most of the RAM is buffered/cached. Try to clear the caches with this command:
# free && sync && echo 3 > /proc/sys/vm/drop_caches && free
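If that still doesn't show who is growing, a rough approach (the log path and interval below are just placeholders) is to record the biggest RSS consumers and the kernel's own counters every minute, then compare the log against the OOM killer timestamps:
# minimal sketch, assuming you can leave a root shell or small script running
while true; do
  {
    date
    ps aux --sort -rss | head -11                          # top 10 processes by resident memory
    grep -E 'MemFree|Buffers|^Cached|Slab' /proc/meminfo   # kernel-side counters
  } >> /var/log/mem-watch.log
  sleep 60
done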
In case someone comes here for a solution, this is an update:
I rolled back all the config modifications and did a fresh reboot of the server. For the last 2 months the server has looked good and the problem has disappeared.
Not sure what happened here ...