I have closed-source software with some memory-leak problems. Is there a tool or solution to "sandbox" processes in a fixed amount of memory, without using "ulimit"? (Too generic; I need per-application memory control.)
On systemd-based distros you can also use systemd-run (which indirectly uses cgroups). For example:
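A minimal sketch of such an invocation (the binary name ./leaky-app and the 512M figure are placeholders; MemoryMax is the cgroups-v2 property, MemoryLimit its cgroups-v1 equivalent):

```shell
# Launch the app in a transient scope unit with a hard memory cap.
# ./leaky-app and 512M are assumptions -- substitute your own binary and limit.
systemd-run --scope -p MemoryMax=512M ./leaky-app
```

When the cap is exceeded, the kernel's OOM killer terminates processes inside the scope rather than the whole system.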
Note: this will prompt you for a password, but the app is still launched as your user. Do not let that mislead you into thinking the command needs to run with sudo; that would run it as root, which is hardly your intention. If you want to avoid the password prompt (indeed, why would you need a password to limit memory you already own), you can use the --user option; however, for this to work you need cgroups v2 support enabled, which right now requires booting with the systemd.unified_cgroup_hierarchy kernel parameter.

'ulimit' is a 'per-application' control… per-process, in fact. The ulimit shell command is a shell built-in that sets the limit for the shell process and its children. Put the ulimit command in the script that starts your application, and the limit will apply to that application only.
You could use a process-management daemon like monit to monitor the amount of memory in use by your process and restart it when it grows beyond your defined limit.

This may sound drastic, but given that your application is known to leak, restarting it regularly based on its usage merely pre-empts the inevitable: the point at which the process grows larger than the smaller of your machine's physical memory and any address-space limitation imposed by your operating system.
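A sketch of the corresponding monit configuration fragment (the process name, pidfile path, init scripts, and the 512 MB threshold are all assumptions):

```
# monitrc fragment: restart the app when its memory use stays high.
check process leaky-app with pidfile /var/run/leaky-app.pid
    start program = "/etc/init.d/leaky-app start"
    stop program  = "/etc/init.d/leaky-app stop"
    if totalmem > 512 MB for 3 cycles then restart
```

The "for 3 cycles" clause avoids restarting on a transient spike; monit only acts when the threshold is exceeded across consecutive polling intervals.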
Use cgroups. https://man7.org/linux/man-pages/man7/cgroups.7.html
Note the difference between memory.limit_in_bytes and memory.memsw.limit_in_bytes. Also note memory.soft_limit_in_bytes.
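As a sketch of the cgroups-v1 interface (the mount point, the group name leaky-app, and the 512 MiB figure are assumptions; this requires root, and memory.memsw.* is only present when swap accounting is enabled):

```shell
# Create a memory cgroup and cap it (cgroups-v1 interface).
mkdir /sys/fs/cgroup/memory/leaky-app
# Hard RAM limit (512 MiB):
echo 536870912 > /sys/fs/cgroup/memory/leaky-app/memory.limit_in_bytes
# Combined RAM+swap limit, so the leak cannot spill into swap:
echo 536870912 > /sys/fs/cgroup/memory/leaky-app/memory.memsw.limit_in_bytes
# Move the current shell into the group; children inherit membership:
echo $$ > /sys/fs/cgroup/memory/leaky-app/cgroup.procs
./leaky-app
```

memory.soft_limit_in_bytes, by contrast, is only enforced under memory pressure: the kernel reclaims pages from groups over their soft limit first, but does not kill them.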