The server is running out of memory and gets to the point where the kernel starts killing processes. The total PSS (proportional set size, i.e. each process's fair share of the resident memory it uses) consumed by the top-consuming applications is less than the total memory on the system, and I want to find out where this extra memory usage is happening. Any ideas? Below are the outputs from /proc/meminfo, smem, and free -m.
Any suggestions would be really appreciated.
cat /proc/meminfo
MemTotal: 5976008 kB
MemFree: 138768 kB
Buffers: 2292 kB
Cached: 57444 kB
SwapCached: 85980 kB
Active: 324332 kB
Inactive: 121836 kB
Active(anon): 309264 kB
Inactive(anon): 77992 kB
Active(file): 15068 kB
Inactive(file): 43844 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 8159224 kB
SwapFree: 6836184 kB
Dirty: 572 kB
Writeback: 0 kB
AnonPages: 372160 kB
Mapped: 13976 kB
Shmem: 472 kB
Slab: 328216 kB
SReclaimable: 92544 kB
SUnreclaim: 235672 kB
KernelStack: 4824 kB
PageTables: 14732 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 8159224 kB
Committed_AS: 4940480 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 102424 kB
VmallocChunk: 34359584392 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6384 kB
DirectMap2M: 2080768 kB
DirectMap1G: 4194304 kB
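One thing that stands out above is SUnreclaim: roughly 230 MB of slab memory the kernel cannot reclaim. To see which kernel caches are behind the slab usage, something like the following should work (slabtop ships with procps; reading /proc/slabinfo needs root):

# Show the largest kernel slab caches, biggest first
slabtop -o -s c | head -20

# Alternatively, compute each cache's size (object count x object size)
# straight from /proc/slabinfo and sort, skipping the two header lines
awk 'NR > 2 { print $1, $3 * $4 }' /proc/slabinfo | sort -k2 -rn | head -10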
SMEM usage:
  PID User     Command                       Swap      USS      PSS      RSS
30971 root     python /usr/local/scripts/s   2432      660      860     1204
23296 root     /usr/bin/spamd -d -c -m5 -H  58296     1460     1564     1868
 2763 ufc      csrv -c /home/ufc/ufclient/ 116000    12768    12792    13084
55819 root     /usr/bin/python /bin/smem        0    22356    22988    24364
 2101 root     clamd                       189228    41224    41280    41700
32914 root     /opt/safesquid/safesquid/sa 831120     5808   138619   271844
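For what it's worth, smem can total these columns itself, which makes it easy to compare the summed PSS against MemTotal (-t adds a totals row, -k switches to human-readable units, -s pss sorts by PSS):

# Per-process USS/PSS/RSS with a totals row, sorted by PSS
smem -t -k -s pss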
[root@server sysadmin]# free -m
             total       used       free     shared    buffers     cached
Mem:          5835       5695        140          0          1         19
-/+ buffers/cache:       5674        161
Swap:         7967       1315       6652
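To get a rough idea of where the gap is, one approach is to add up the pools /proc/meminfo does account for and subtract the sum from MemTotal; whatever is left over is kernel-side memory outside those counters (driver allocations, for example). A quick sketch (with the figures above it leaves roughly 4.8 GB unaccounted for):

# Rough accounting of MemTotal against the main meminfo pools (all in kB)
awk '/^(MemFree|Buffers|Cached|AnonPages|Slab|PageTables|KernelStack):/ { sum += $2 }
     /^MemTotal:/ { total = $2 }
     END { printf "accounted: %d kB, unaccounted: %d kB\n", sum, total - sum }' /proc/meminfo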
UPDATE:
The server is back to normal now, but memory usage keeps climbing until, after about 7 hours, the application gets killed:
Out of memory: Kill process 14585 (safesquid) score 81 or sacrifice child
Killed process 16141, UID 500, (python) total-vm:79284kB, anon-rss:2656kB, file-rss:680kB
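The kernel writes a full report each time the OOM killer fires, including a table of every process and its RSS at the moment of the kill, which tells you exactly who held the memory. It can be pulled from the ring buffer, or from syslog if the box logs kernel messages to /var/log/messages:

# Full OOM report, including the per-process memory table
dmesg | grep -B 5 -A 30 'Out of memory'

# Same, from syslog, in case the ring buffer has already wrapped
grep -A 30 'Out of memory' /var/log/messages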
top - 21:58:16 up 16 days, 11:10, 1 user, load average: 0.46, 0.74, 0.78
Tasks: 243 total, 1 running, 242 sleeping, 0 stopped, 0 zombie
Cpu(s): 5.7%us, 5.8%sy, 0.0%ni, 88.3%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 5976008k total, 5830648k used, 145360k free, 35724k buffers
Swap: 8159224k total, 445384k used, 7713840k free, 3684540k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4960 ssquid 20 0 1000m 534m 3068 S 20.6 9.2 90:19.40 safesquid
2101 clamav 20 0 4153m 85m 1672 S 2.0 1.5 536:42.26 clamd
23333 root 20 0 244m 50m 1940 S 0.0 0.9 2:10.84 spamd
2763 ufc 20 0 1628m 32m 25m S 1.0 0.5 399:12.74 csrv
61303 root 20 0 97876 4380 3304 S 0.0 0.1 0:00.28 sshd
23296 root 20 0 227m 3424 928 S 0.0 0.1 0:07.87 spamd
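Since the growth takes about 7 hours, a timestamped log of the suspect's memory every few minutes would confirm which process is leaking and how fast. A minimal sketch, assuming safesquid is the one to watch:

# Log safesquid's VSZ/RSS (kB) every 5 minutes
while true; do
    echo "$(date '+%F %T') $(ps -o vsz=,rss= -p "$(pidof -s safesquid)")"
    sleep 300
done >> /var/log/safesquid-mem.log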
The box is running RuleSpace, ClamAV (clamd), and the SafeSquid proxy.
In the memory graph, the big drop is when the application got killed and I restarted the safesquid service.
@David Schwartz: I am pretty sure the kernel OOM killer kills the process. And yes, we need to know which process is being killed.
I am pretty sure the process that's being killed is misbehaving in some way (or crashing), and as a result it's using up most of the available memory, at which point the kernel's OOM killer decides to finish it off. For example, this kind of behaviour was rampant (in my case) a decade or so ago, when Mozilla/Firefox was more prone to leaking memory than it is now: it would just use more and more, and then suddenly it was gone... you get the idea.
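If you want to see who the OOM killer would pick next, the kernel exposes a score per process in /proc/<pid>/oom_score; the highest score is the first victim. For example:

# Ten most likely OOM-killer victims, highest score first
for p in /proc/[0-9]*; do
    printf '%s %s %s\n' "$(cat "$p/oom_score")" "${p##*/}" \
           "$(awk '/^Name:/ {print $2}' "$p/status")"
done 2>/dev/null | sort -rn | head -10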
Well, here's the breakdown:
Whatever process is being killed probably has a memory leak. Your graph certainly makes that look like the case. You should focus on the purple line more than the yellow. The yellow is actually also free memory.
As for processes not using the total memory, it's unclear what you're saying, as you haven't told me how much is in the machine. However, a certain amount of memory is always used by the kernel and things like the page tables, so the full amount of hardware memory in place is never available to the applications.
In your case, you have 5.7G of total memory showing, which means you probably have 6G installed. A very exhaustive explanation of meminfo might help you out, but the summary is that your big drop comes from a memory-leaking app that needs to be fixed or at least regularly restarted.
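Until the leak itself is fixed, a scheduled restart shorter than the roughly 7-hour run-up keeps the OOM killer out of the picture. A sketch of a cron entry, assuming a SysV-style init script for safesquid (adjust the name and path to match the box):

# /etc/cron.d/safesquid-restart -- restart every 6 hours, before the
# leak reaches the point where the OOM killer steps in
0 */6 * * * root /sbin/service safesquid restart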