- What is your experience? Can you confirm my experimental findings?
- Can I generally use `total RAM - 600 MB`, or `0.4 * total RAM`?
- Or is it always trial and error, and hoping that it is low enough?
Context: I'm trying to set up Jenkins on a T3 instance, experimenting with Ubuntu Server 16.04 and 18.04.
I started with a `t3.micro` instance (1 GB RAM), but found the OOM killer killing my Java process as soon as I used more than about `-Xmx400m`, which seems kind of low. I was expecting to be able to use more like `-Xmx750m`.
Does this mean Ubuntu Server requires about 600 MB to work?
The problem is that the Java process starts even if I set both `-Xms` and `-Xmx` to a very high value, like `700m`. The process is only killed later, when I make the first request to the website.
I'm now experimenting with a `t3.small` instance (2 GB of RAM), but am again very unsure about what to configure.
On Windows it is kind of deterministic: I set both `-Xms` and `-Xmx` to the same value. If the service fails to start, the value was too high. If the service starts successfully, the value is fine and the memory is reserved for my process.
Some background:
- https://plumbr.io/blog/memory-leaks/why-does-my-java-process-consume-more-memory-than-xmx
- https://support.cloudbees.com/hc/en-us/articles/115002718531-Memory-Problem-Process-killed-by-OOM-Killer
- Avoid linux out-of-memory application teardown
- Effects of configuring vm.overcommit_memory
- PostgreSQL seems to have a similar problem: https://dba.stackexchange.com/questions/170596/permanently-disable-oom-killer-for-postgres (and https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT)
If you want to run any application on Linux in a resource-limited environment, you should make sure that you understand how memory on Linux actually works. That sounds harsh, but it really is the only way to understand what is happening behind the curtain.
If the OOM killer gets triggered at all, it means you ran out of memory in the first place. If that happens with `-Xmx400m`, then that is the limit for this specific application, in this specific environment, doing the specific thing it does.
Before you even think about setting `vm.overcommit_memory` to a non-default value, make sure that you really understand the difference between allocating and using memory (in terms of the Linux virtual memory system). Otherwise you will only guarantee that you never use the system's memory efficiently.
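To make that distinction concrete, here is a minimal sketch (the class name `MemoryProbe` and the array sizes are just illustrative assumptions): the JVM reserves the whole `-Xms`/`-Xmx` heap up front, which shows up in `VmSize`, but the kernel only backs pages with physical RAM (`VmRSS`) once they are actually written. That is also why the Java process in the question starts fine with `-Xms700m` and is only killed later, when real work starts touching the heap.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Run on Linux, e.g.: java -Xms700m -Xmx700m MemoryProbe
public class MemoryProbe {

    // Print the process's virtual size (allocated) and resident size (actually used).
    static void printStatus(String label) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                System.out.println(label + " " + line);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // At startup VmSize already contains the reserved heap,
        // but VmRSS (real RAM in use) is much smaller.
        printStatus("at startup:   ");

        // Writing to memory forces the kernel to back the pages with RAM;
        // on an overcommitted, memory-starved host this is where the OOM killer strikes.
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            blocks.add(new byte[1024 * 1024]); // ~500 MB actually touched
        }

        printStatus("after filling:");
        System.out.println("blocks held: " + blocks.size());
    }
}
```

(If you want the failure to surface at startup, the way the question describes Windows behaving, the JVM flag `-XX:+AlwaysPreTouch` makes it touch the whole heap during initialization instead of lazily.)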
Having said all that: the defaults are usually good in 80–90% of use cases. For Jenkins the kernel defaults should actually work rather well. You will want the kernel to overcommit the available memory (which is the default) and trigger the OOM killer when it actually runs out (also the default).
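If you want to confirm that your instance is still on those defaults, the values are readable from `/proc` (normally you would just run `sysctl vm.overcommit_memory`; the small Java sketch below, with the assumed class name `OvercommitCheck`, only keeps the example in the same language as the rest of this question):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Prints the current kernel overcommit settings (Linux only).
public class OvercommitCheck {
    static String read(String path) throws IOException {
        return Files.readAllLines(Paths.get(path)).get(0).trim();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("vm.overcommit_memory = " + read("/proc/sys/vm/overcommit_memory")
                + "  (0 = heuristic overcommit, the kernel default)");
        System.out.println("vm.overcommit_ratio  = " + read("/proc/sys/vm/overcommit_ratio")
                + "  (only consulted when overcommit_memory is 2)");
    }
}
```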
If that doesn't work with the memory you have available, you'll need more memory. Also, setting a larger `-Xmx` value than the application requires is considered harmful, but you asked about Linux memory, not Java memory :)