I'd like to have one line of shell/bash that does something along these lines:
test "`free | grep | awk | whatever` -gt 80" && any_command
where the total percentage of RAM being used by the whole system is compared against a hardcoded number (80 in my case). Whether it's test or something else doesn't matter, as long as any_command executes when RAM usage is higher than the given percentage.
- exact bytes/megabytes instead of a percentage are OK
- this should work on a typical Ubuntu 14.04 system
- intended for use as a cron job
- bonus: a one-liner which does the same thing, but checks RAM for a specific process
Update
There are answers on how this is a problem that the likes of monit/bluepill/god are built to solve. I agree 100%, and you should most likely follow advice in those answers. However, this question is specifically about the exact line I described, for whatever reasons that might be, assuming all the caveats and problems this might involve.
How about:
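For instance, a sketch along these lines (assuming the free layout shipped with Ubuntu 14.04, where the "Mem:" line has total in column 2 and used in column 3):

```
# Run any_command when overall memory usage (used/total from the "Mem:" line) exceeds 80%
[ "$(free | awk '/^Mem:/ {printf "%d", $3/$2*100}')" -gt 80 ] && any_command
```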
And for the process consumption, here is a possible part of a solution:
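For example, summing ps's %MEM column for all processes with a given name (process_name is a placeholder):

```
# Sum %MEM across all processes named process_name and compare against 5%
[ "$(ps -C process_name -o pmem= | awk '{s+=$1} END {printf "%d", s}')" -gt 5 ] && any_command
```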
Combining both of them is left as an exercise to the reader.
Don't reinvent the wheel :)
The Monit utility is purpose-built to handle this sort of situation. It's well-documented and has plenty of examples here on ServerFault.
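For overall memory, a rule along these lines works (a sketch; the hostname is just a placeholder, and the directives should be checked against your monit version, 5.6 on 14.04):

```
# Alert when system-wide memory usage stays above 80% for two polling cycles
check system myhost.example.com
    if memory usage > 80% for 2 cycles then alert
```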
or for a process:
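For instance (every name and path here is a placeholder):

```
# Watch one daemon via its pidfile and restart it if its memory footprint grows too large
check process mydaemon with pidfile /var/run/mydaemon.pid
    start program = "/etc/init.d/mydaemon start"
    stop program  = "/etc/init.d/mydaemon stop"
    if totalmem > 500 MB for 2 cycles then restart
```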
Instead of an alert or start/stop/restart action, you can configure an EXEC:
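For example (the command path is a placeholder):

```
# Inside a "check system" (or "check process") block: run an arbitrary command when the threshold is crossed
    if memory usage > 80% then exec "/usr/local/bin/any_command"
```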
What exactly are you trying to accomplish? You're probably trying to do it WRONG.
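If you insist on the literal check, a possible sketch (computing used/total from free's "Mem:" line, the same calculation discussed below) is:

```
# Exit 0 (and therefore run any_command) only if used/total from the "Mem:" line exceeds 80%
free | awk '/^Mem:/ {exit !($3/$2*100 > 80)}' && any_command
```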
But note that whatever you are trying to accomplish with that is almost certainly useless (and probably even harmful). There is no such thing as a "percentage of RAM being used". Yes, free will show you how much memory is "free", but that probably does not mean what you think it means (that field would be better named "wasted", or "the amount of memory you should never have bought").

For example, the kernel does not "load" programs into memory, it maps them, so a program that is hundreds of MB in size may happily run in just 12 KB of resident memory. Also, every file that is accessed gets cached in that same memory (the page cache) -- there is no difference between a program that ran in the past (so its files are cached if it runs again) and data files that were read or written in the past (so they will be served faster if they're accessed again).
So, if you have more disk than memory (quite a common case), the percentage of memory reported as used will quickly converge toward 100% after boot (in practice more like 80-95%, as the kernel will try to keep some memory free so it can hand it out quickly when there is memory pressure). That is normal, and actually desirable, as it will greatly speed up your disk access (in the best case), or be just as good as "free memory" if nothing ever accesses the same files again (worst case).
So you actually want to avoid having memory "free" (which does happen from time to time, right after memory-intensive programs exit).
Edit 1: Also, the result above is just one of the possible (and totally different) answers, because the question as asked is underspecified. For example, instead of "Mem used" you could have used "Mem used minus cached" (which shows how much memory is used once you subtract the disk cache) -- that might give you a result like "15% used" instead of "80% used", and might be more accurate, depending on what exactly you're trying to accomplish.
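For instance, with the free shipped on 14.04, which prints a separate "-/+ buffers/cache:" line (used excluding cache in column 3, free including cache in column 4), a sketch would be:

```
# Percentage of memory used once buffers/cache are excluded (14.04-era free layout assumed)
free | awk '/buffers\/cache/ {printf "%d\n", $3/($3+$4)*100}'
```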
As for process memory usage, it's the same problem -- there are way too many ways to define how much memory a process uses. Is it the amount the program has requested (the VSZ column in ps output)? Or the amount currently resident in RAM (the RSS column)? What about multiple instances sharing code (for example, if you have 100 Apache processes of 50 MB RSS each, they do NOT use 100*50 = 5000 MB of RAM, but more like 200 MB altogether), etc.? Only when you know exactly what you want can you proceed to calculate it (just VSZ, or just RSS, or RSS minus shared, or RSS minus shared divided by the number of processes sharing it, etc.). Also note that this type of question is more on-topic on superuser.com.
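To see those two numbers side by side for a given process ($PID is a placeholder here):

```
# Show virtual size (VSZ) and resident set size (RSS), both in kB, for one process
ps -o pid,vsz=VSZ_kB,rss=RSS_kB,comm -p "$PID"
```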
Edit 2: As for your comment, you're trying to guard against a memory leak in some process. Checking free memory is definitely the wrong way to do that, as it will give false positives. You should instead put limits on your process so a memory leak can't bring the rest of the system down (see help ulimit in bash). The process might handle hitting the limit gracefully (good), or die when it can't allocate the memory it wants, in which case you can restart it (via monit, supervise, runit or similar).

Edit 3: In addition to (or as an alternative to, but really better in addition to) setting process limits, you can use something like the snippet below to restart the process when its RSS grows too big.
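A minimal sketch of that idea, assuming a 500000 kB threshold and a placeholder restart command:

```
# Restart the daemon (placeholder command) when its resident set size exceeds ~500 MB;
# VmRSS in /proc/$PID/status is reported in kB
awk '/^VmRSS:/ {exit !($2 > 500000)}' "/proc/$PID/status" && /etc/init.d/somedaemon restart
```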
Instead of $PID you would of course use the PID of your process (for example from PID=$(cat /var/run/something.pid) or PID=$(pidof somedaemon), etc.). However, note that you would probably be better off using VmSize (or VmPeak) instead of VmRSS, as otherwise your process can still bring the system down if it gets pushed into swap (VmSize will be big while VmRSS stays small).