If you must use bash, you can either save the file contents in an array and iterate over its elements, or read the file line by line and add each line to the sum. The second approach is more efficient:
$ time { nums=($(<file.txt)); for i in ${nums[@]}; do (( sum+=i )); done; echo $sum ;}
19
real 0m0.002s
user 0m0.000s
sys 0m0.000s
$ time { while read i; do (( sum+=i )); done <file.txt; echo $sum ;}
19
real 0m0.000s
user 0m0.000s
sys 0m0.000s
The command numsum does just what you need by default:
$ numsum file.txt
19
Reading the test numbers line by line from stdin:
$ printf '
1
3
4
1
4
3
1
2' | numsum
19
Or reading from one line:
$ printf '1 3 4 1 4 3 1 2' | numsum -r
19
More utilities
The package contains some other utilities for number processing that deserve to be more well known:
numaverage - find the average of the numbers, or the mode or median
numbound - find the minimum or maximum of all lines
numgrep - to find numbers matching ranges or sets
numinterval - roughly like the first derivative
numnormalize - normalize numbers to an interval, like 0-1
numrandom - random numbers from ranges or sets, e.g. odd numbers
numrange - similar to seq
numround - round numbers up, down or to nearest
and a more general calculator command, numprocess, which applies an expression from the command line to the numbers on input lines.
You can use awk, a standard Unix utility for scanning and processing files line by line against patterns. For your question, this will produce what you want:
awk 'BEGIN { sum=0 } { sum+=$1 } END {print sum }' file.txt
Pipes are also accepted:
cat file.txt | awk 'BEGIN { sum=0 } { sum+=$1 } END {print sum }'
bc, with a little help from paste to join the lines into a single one with + as the separator:

To use the output of grep (or any other command) instead of a static file, pass grep's STDOUT to the STDIN of paste. Example:
You could use awk, too. To count the total number of lines in *.txt files that contain the word "hello":
To simply sum the numbers in a file:
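The awk one-liners this answer refers to are not shown; here is a sketch of both, assuming the counting example runs over *.txt files and the summing example over file.txt as elsewhere in the thread:

```shell
# Count lines containing "hello" across all *.txt files
awk '/hello/ { n++ } END { print n }' *.txt

# Sum the numbers in a file (first field of each line)
awk '{ sum += $1 } END { print sum }' file.txt
```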
Use numsum from the package num-utils! (You may need to sudo apt-get install num-utils.)
Perl solution:
The above can also sum the numbers across multiple files:

For multiple files given on the command line, where we want the sum of the numbers in each individual file, we can do this:
A simple approach is to use a built-in feature of your shell:

This reads your file line by line, sums the values, and prints the result.

If you want to use a pipe and only use the first row, it works like this:

Getting the first element is done like this:
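The shell snippets this answer refers to are not shown; a minimal sketch of the built-in approach, assuming bash (variable names are my own):

```shell
# Sum the file line by line using only shell built-ins
sum=0
while read -r n; do
  sum=$(( sum + n ))
done < file.txt
echo "$sum"

# Getting the first element of a whitespace-separated line (bash here-string)
read -r first _ <<< "1 3 4"
echo "$first"
```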
This is a fairly simple use of bash scripting.