I have a command (cmd1) that greps through a log file to pull out a set of numbers. The numbers are in random order, so I use sort -gr
to get a reverse-sorted list of numbers. There may be duplicates within this sorted list. I need to find the count for each unique number in that list.
For example, if the output of cmd1 is
100 100 100 99 99 26 25 24 24
I need another command that I can pipe the above output into, so that I get:

100 3
99 2
26 1
25 1
24 2
If you can handle the output being in a slightly different format (count first, then the number), you could pipe through uniq -c:

cmd1 | sort -gr | uniq -c

You'd get back:

      3 100
      2 99
      1 26
      1 25
      2 24
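A self-contained sketch of that approach, with printf standing in for cmd1 (whose real output I don't have) and the numbers taken from the example in the question:

```shell
# printf stands in for cmd1: one number per line, unsorted, with duplicates
printf '%s\n' 99 100 26 100 24 99 25 100 24 |
    sort -gr |   # reverse general-numeric sort, as in the question
    uniq -c      # prefix each run of equal adjacent lines with its length
```

Note that uniq -c left-pads the counts with spaces, which is the "slightly different format" mentioned above.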
If you only need each unique number once, without the counts, you can instead add the -u switch to sort itself. Thus you would have:

cmd1 | sort -gru

From the sort manpage:

-u, --unique
       without -c, output only the first of an equal run
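A quick check of that behaviour (printf again standing in for cmd1): -u collapses each run of equal lines, so you get the distinct values but the counts are lost:

```shell
# -u keeps only the first line of each equal run: distinct numbers, no counts
printf '%s\n' 100 100 100 99 99 26 25 24 24 | sort -gru
```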
(I'm assuming your input is one number per line, as that's what sort would output.)
You could try awk:
<your_command> | awk '{numbers[$1]++} END {for (number in numbers) print number " " numbers[number]}'
This would give you an unsorted list (the order in which awk walks through an array is undefined, so far as I know), so you'd have to sort again to your liking.
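For example, piping the awk output back through sort pins the order down again (printf stands in for your command):

```shell
# Count with an awk associative array, then re-sort descending by the number
printf '%s\n' 100 100 100 99 99 26 25 24 24 |
    awk '{numbers[$1]++} END {for (n in numbers) print n, numbers[n]}' |
    sort -gr -k1,1   # awk's iteration order is unspecified, so sort again
```

This yields "number count" pairs in descending numeric order, matching the format asked for in the question.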