I am a user of a Linux-based computing cluster that runs the task queuing system PBSPro. PBSPro likes to know how much RAM should be available for a task to make sure it will execute correctly; for example, with `qsub -l select=1:mem=4GB someapp` I declare that I want to submit `someapp` for execution on a node that has at least 4GB of free memory. The tighter the bound I can provide, the sooner my application will be scheduled for execution.
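For reference, the same request can be given as directives inside a job script instead of on the `qsub` command line (a minimal sketch; the application name and the walltime value are placeholders):

```
#!/bin/bash
#PBS -l select=1:mem=4GB
#PBS -l walltime=01:00:00

# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"
./someapp
```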
How can I estimate how much memory `someapp` will need?

I can make a test run and watch `htop`, carefully tracking my process's RSS, but is there a tool or method that would make this more automatic? Say, one that reports the maximum amount of memory paged in to the process over the whole time of its execution?
I am just a normal user on the cluster, with no root access. I am only running a single process with potentially several threads; even if `someapp` calls `fork()`, I don't care whether the child process's memory is counted or not.
GNU `time` has a `-v` option whose report includes the maximum RSS. Note that this is not the shell built-in, so invoke it by its full path: `/usr/bin/time -v`.
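For a test run of `someapp`, the relevant line of the report looks like this (output trimmed; the figure shown is illustrative):

```
$ /usr/bin/time -v ./someapp
        ...
        Maximum resident set size (kbytes): 1048576
        ...
```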
The reality is more complicated than that. However, you probably don't want to bother with detailed memory-map parsing à la `pmap`, nor do you need the level of detail provided by a profiler like `valgrind`. Alternatively, if you know some task size, say 4 GB, will be scheduled quickly, you can simply request that and deal with failure if it turns out to be too little.
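If GNU `time` happens not to be installed on the cluster, a rough do-it-yourself alternative (a sketch, assuming a Linux `/proc` filesystem; `peakrss.sh` is a hypothetical name) is to poll the kernel's own peak-RSS counter, `VmHWM`, from `/proc/<pid>/status` while the process runs:

```
#!/bin/bash
# peakrss.sh -- run a command and report its peak resident set size.
# VmHWM ("high water mark") in /proc/<pid>/status is the kernel's own
# running record of the peak RSS, so each read already reflects the
# maximum so far; we just need to capture it before the process exits.
"$@" &
pid=$!
peak="VmHWM: unknown"
while kill -0 "$pid" 2>/dev/null; do
    v=$(grep VmHWM "/proc/$pid/status" 2>/dev/null) && peak=$v
    sleep 1   # 1 s granularity; a spike in the final second may be missed
done
wait "$pid"
echo "$peak"
```

Invoked as `./peakrss.sh ./someapp arg1 arg2`, it prints a line like `VmHWM: 1048576 kB` when the program finishes.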