I sometimes need to check some logs, and I do it with this command:

```
egrep -o "success|error|fail" <filename> | sort | uniq -c
```
Sample input:

```
test error on line 10
test connect success
test insert success
test started at 00:00
test delete fail
```

Sample output:

```
1 error
1 fail
2 success
```
I would like to know if someone knows a way to do this with a shorter command.

Before you ask why I would like to do this with a different command... no special reason, I'm just curious :)
No, I think that yours is about as good as it gets. Naturally, you could do it with a Perl script, but it would be more complex and less intuitive.
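For instance, a sketch of what such a one-liner could look like:

```
perl -ne '$c{$_}++ for /success|error|fail/g;
          END { printf "%d %s\n", $c{$_}, $_ for sort keys %c }' <filename>
```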
Here is the `awk` way of doing it, but all these one-liners will be a bit lengthier than our good old `grep`:
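A sketch (it matches whole fields, so it is slightly stricter than `grep -o` on substrings, and the output order is not guaranteed to be sorted):

```
awk '{ for (i = 1; i <= NF; i++)                 # check every whitespace-separated field
         if ($i ~ /^(success|error|fail)$/) count[$i]++
     } END { for (w in count) print count[w], w }' <filename>
```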
Not much shorter, but since you don't really need the regular expression, there's `fgrep` (`grep -F`).
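For example (a sketch; `-o` still prints each fixed-string match on its own line):

```
grep -oF -e success -e error -e fail <filename> | sort | uniq -c
```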
Another way to write the same thing in bash:
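One possibility (a sketch, assuming bash 4+ for the associative array):

```
declare -A count                                    # requires bash 4+
while read -r word; do
  (( count[$word]++ ))                              # tally each matched keyword
done < <(egrep -o "success|error|fail" <filename>)
for word in "${!count[@]}"; do
  printf '%s %s\n' "${count[$word]}" "$word"
done
```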
You could write a simple bash script and then call the script, like:
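A minimal sketch, simply wrapping your original pipeline:

```
#!/bin/bash
# print counts of success/error/fail occurrences in the file given as $1
egrep -o "success|error|fail" "$1" | sort | uniq -c
```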
and save it as (for example) `myscript.sh`. Then do a `chmod +x myscript.sh` and you can call it like `myscript.sh <filename>`.
Your command, while short and sweet, is a rather circuitous way to count occurrences of a term. I'd probably take the blunt, direct approach and use `grep`'s `-c` flag (which does exactly that) inside a shell loop:
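Something like this (a sketch; note that `-c` counts matching lines rather than matches, which is fine as long as each line contains at most one keyword):

```
for word in success error fail; do
  printf '%s %s\n' "$(grep -c "$word" <filename>)" "$word"
done
```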
Not as short, not as exciting, and potentially faster for large logfiles (no `sort`). I'd say it's a wash.

This could be a dummy answer, but I think that in this case `sort` is quite useless; maybe you can omit it (see the sketch below). Nevertheless, here we are using three different commands for three different actions. We could shorten it if some of them could be covered by an option of `grep`, but I don't see which... :)
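Dropping `sort` would leave (a sketch; beware that `uniq -c` only merges adjacent duplicates, so this only matches the sorted output when identical matches happen to be next to each other, as in the sample input):

```
egrep -o "success|error|fail" <filename> | uniq -c
```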