I have a Java application running on a Linux server with 12 GB of physical memory (RAM). Over a period of time, normal utilization looks like this:
sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G        7.8G        1.6G        9.0M        2.2G        3.5G
Swap:            0B          0B          0B
Recently, after increasing the load on the application, I can see that RAM utilization is almost full and the available memory is very low. I notice some slowness, but the application continues to work fine:
sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G         11G        134M         17M        411M        240M
Swap:            0B          0B          0B
sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G         11G        145M         25M        373M        204M
Swap:            0B          0B          0B
I referred to https://www.linuxatemyram.com/, which suggests the points below.
Warning signs of a genuine low memory situation that you may want to look into:
- available memory (or "free + buffers/cache") is close to zero
- swap used increases or fluctuates.
- dmesg | grep oom-killer shows the OutOfMemory-killer at work
From the above points: I don't see any OOM-killer activity at the application level, and swap is disabled on this server, so I am setting those two points aside. The point that troubles me is the available memory being close to zero, and that is where I need clarification.
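For reference, a minimal sketch of how these signs can be checked from the shell (assuming a typical Linux setup; dmesg may require root on some distributions):

#!/bin/sh
# Sketch: check the three warning signs listed above

# 1. Available memory close to zero (value reported in kB)
awk '/^MemAvailable/ {print "MemAvailable: " $2 " kB"}' /proc/meminfo

# 2. Swap usage increasing or fluctuating (all zero here since swap is disabled)
free -h | grep -i swap

# 3. OOM killer activity in the kernel log
dmesg | grep -i oom-killer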
Questions:
- If available memory is close to 0, will it end up in a system crash?
- Does that mean I need to add more RAM whenever available memory runs this low?
- On what basis should RAM be allocated/increased?
- Are there any official recommendations/guidelines to follow for RAM allocation?
I was able to get an answer to one of my questions.
While testing on one of my servers, I loaded the memory almost to full.
I could see that only my application (which consumed the most memory) was killed by the Out of Memory killer, which can be seen in the kernel logs:
sys> dmesg -e
https://www.kernel.org/doc/gorman/html/understand/understand016.html
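A sketch of the kind of kernel-log checks that show the kill (the exact message text varies by kernel version, so the grep patterns here are assumptions based on the usual "Out of memory: Killed process ..." format):

# Human-readable timestamps; "Killed process" lines identify the victim
dmesg -T | grep -iE "oom-killer|killed process"

# On systemd-based systems the same messages are also in the kernel journal
journalctl -k | grep -i "killed process"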