I use my computer for scientific programming. It has a healthy 8 GB of RAM and 12 GB of swap space. As my problems have grown larger, I often exceed all of the available RAM. Rather than crashing the offending program (which would be preferable), Ubuntu starts paging everything into swap, including Unity and any open terminals. If I don't catch a runaway program in time, there is nothing I can do but wait: it takes 4-5 minutes just to switch to a virtual console (e.g. Ctrl-Alt-F2) where I can kill the offending process.
Since my own stupidity is out of scope for this forum: how can I prevent Ubuntu from thrashing itself into unusability when a single runaway program uses up all of the available memory?
At-home experiment*!

Open a terminal, launch `python`, and if you have `numpy` installed, try this:

    >>> import numpy
    >>> [numpy.zeros((10**4, 10**4)) for _ in range(50)]

Each array holds 10**8 float64 values, i.e. 800 MB, so the full list would need roughly 40 GB.

* Warning: this may have adverse effects; monitor the process via `iotop` or `top` so you can kill it in time. If not, I'll see you after your reboot.
The shell built-in `ulimit` allows you to restrict resources. For your case, to limit virtual memory use in the shell (and its children), use `ulimit -v`. The value is given in kilobytes, so a limit of 100 MB is `ulimit -v 100000`.
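A minimal demonstration of such a limit in action (a sketch; the Python one-liner below is my own illustration, not the original answer's script):

```shell
# Run in a subshell so the cap does not stick to your interactive shell;
# once lowered, ulimit -v cannot be raised again without a new shell.
(
  ulimit -v 100000                             # cap virtual memory at 100000 KB (~100 MB)
  python -c 'x = "a" * (200 * 1024 * 1024)'    # tries to allocate ~200 MB and fails
)
```

Once the cap is hit, allocations fail (Python raises `MemoryError`, `malloc` returns NULL in C programs) instead of driving the whole machine into swap.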
Using `ps uww -C script-name-here`, you can observe that python requires at least 29 MB of memory (VSZ column). The limit has to grow as your python script allocates more memory, so adapt the value to your workload.

Cgroups should also let you limit memory usage on a per-process basis:
https://en.wikipedia.org/wiki/Cgroups
http://www.mjmwired.net/kernel/Documentation/cgroups/memory.txt
Scientific computing is notoriously memory-intensive. By sandboxing your app in a cgroup, the rest of your processes will not become victims, since the memory pressure is contained.
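A hedged sketch of the cgroups-v1 approach (the group name, limit, and script name are my own assumptions; requires root, with the memory controller mounted at the usual path):

```shell
# Create a memory cgroup and cap it at 6 GB, leaving headroom for the desktop.
sudo mkdir /sys/fs/cgroup/memory/science
echo 6G | sudo tee /sys/fs/cgroup/memory/science/memory.limit_in_bytes

# Move the current shell into the group; its children inherit the limit.
echo $$ | sudo tee /sys/fs/cgroup/memory/science/tasks
python my_simulation.py    # hypothetical script; it can now use at most 6 GB
```

On newer systems using cgroups v2, the equivalent knob is `memory.max` under `/sys/fs/cgroup/` instead.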
Alternatively, a VM can act as a sort of hard limit, since the app can only use the memory delegated to the virtual machine, at the expense of performance of course. For the uninitiated, though, a VM is much easier to configure than setting up and maintaining a cgroup.
Decisions decisions :) Good luck!