From my script output I want to capture ALL the log data, including error messages, and redirect it all to a log file.
I have a script like this:
#!/bin/bash
(
echo " `date` : part 1 - start "
ssh -f [email protected] 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
echo " `date` : sleep 120"
sleep 120
echo " `date` : part 2 - start"
ssh [email protected] 'bash /www/htdocs/server.com/scripts/part2.sh logout exit'
echo " `date` : part 3 - start"
ssh [email protected] 'bash /www/htdocs/server.com/scripts/part3.sh logout exit'
echo " `date` : END"
) | tee -a /home/scripts/cron/logs
I want to see all actions in the file /home/scripts/cron/logs, but I only see what I put after the echo commands. How can I check in the logs whether the SSH commands were successful?
I need to gather all the logs, so I can monitor the result of every command in my script and better analyse what's going on when it fails.
I generally put something similar to the following at the beginning of every script (especially if it'll run as a daemon):
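exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1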
Explanation:
exec 3>&1 4>&2
Saves file descriptors so they can be restored to whatever they were before redirection or used themselves to output to whatever they were before the following redirect.
trap 'exec 2>&4 1>&3' 0 1 2 3
Restore file descriptors for particular signals. Not generally necessary since they should be restored when the sub-shell exits.
exec 1>log.out 2>&1
Redirect stdout to file log.out, then redirect stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.
From then on, to see output on the console (maybe), you can simply redirect to &3. For example, echo "$(date) : part 1 - start" >&3 will go to wherever stdout was directed, presumably the console, prior to executing line 3 above.

As I read your question, you don't want to log the output, but the entire sequence of commands, in which case the other answers won't help you.
Invoke shell scripts with -x to output everything:
sh -x foo.sh
Log to the file you want with (the -x trace goes to stderr, so redirect that too):
sh -x foo.sh >> /home/scripts/cron/logs 2>&1
To get the ssh output into your logfile, you have to redirect stderr to stdout. You can do this by appending 2>&1 after your bash script. It should look like this:
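#!/bin/bash
(
echo " `date` : part 1 - start "
ssh -f [email protected] 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
# ... the rest of the script stays the same ...
echo " `date` : END"
) 2>&1 | tee -a /home/scripts/cron/logs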
When this does not display the messages in the right order, try to add another subshell:
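#!/bin/bash
( (
echo " `date` : part 1 - start "
ssh -f [email protected] 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
# ... the rest of the script stays the same ...
echo " `date` : END"
) 2>&1 ) | tee -a /home/scripts/cron/logs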
In bash, you can put set -x in your script and it will print off every command that it executes (and the bash variables) after that. You can turn it off with set +x.
If you want to be paranoid, you can put set -o errexit in your script. This means the script will fail and stop if one command returned a non-zero exit code, which is the unix standard way to signal that something went wrong.
If you want to get nicer logs, you should look at ts in the moreutils debian/ubuntu package. It will prefix each line with a timestamp and print it out, so you can see when things were happening.
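For example, something like this (the script name is hypothetical; assumes moreutils is installed):
bash -x /home/scripts/cron/myscript.sh 2>&1 | ts '%Y-%m-%d %H:%M:%S' >> /home/scripts/cron/logs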
Following off of what others said, the set manual is a good resource. I put
set -x
at the top of scripts I wish to keep going, or set -ex if it should exit upon error.

I've found that the @nicerobot answer (How can I fully log all bash scripts actions?) might not be enough to completely redirect all output to the console; some output can still be lost.
The complete redirection looks like this:
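Something along these lines captures the pipe-plus-tee idea (a minimal sketch only; the full version also has the NEST_LVL/IMPL_MODE machinery described below, and myfoo stands in for the code being logged):

myfoo() {
  echo "this goes to stdout"
  echo "this goes to stderr" >&2
}

# split the streams with subshells plus tee:
# myfoo.err.log gets stderr, myfoo.out.log gets stdout,
# myfoo.all.log gets both, and everything still reaches the console
(
  (
    myfoo 2>&1 1>&3 | tee myfoo.err.log
  ) 3>&1 1>&2 | tee myfoo.out.log
) 2>&1 | tee myfoo.all.log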
Explanation:
- echo calls go both to the console and to the file, so the pipe-plus-tee is the only way to split the output stream.
- The (...) operator is used to either redirect each stream into a standalone file or to restore all streams back to the original in separate steps. The second reason is that myfoo calls before the self-reentrance have to work as-is without any additional redirection.
- A NEST_LVL variable indicates the nested call level.
- An IMPL_MODE variable.

TL;DR - Yesterday I wrote a set of tools for logging program runs and sessions.
Currently available at https://github.com/wwalker/quick-log
As an admin, I'm always wanting to log the output of some command, often not a script. To solve that problem, I've written a few things. The easiest is to use the program "script" as xX0v0Xx mentioned. I found that calling script (without any arguments) would often result in me overwriting the output of a previous script. So I created this alias. All it does is prevent overwriting. You need a ~/tmp directory.
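Something along these lines (the exact alias in the repo may differ; the point is the timestamped, non-clobbering filename):

alias script='script ~/tmp/script-$(date +%FT%T).log'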
That is great when I want to catch an interactive session.
When I want to log the output of a command (script or binary), I either want the exact output, or I want the output with timestamps in front of each line. So I wrote these two bash functions:
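As a sketch of the idea (the real bodies are in the repo; ~/tmp and the filename scheme here are assumptions):

justlog() {
  # exact output, teed to a timestamped file named after the command
  "$@" 2>&1 | tee ~/tmp/"${1##*/}"-"$(date +%FT%T)".log
}

timelog() {
  # same, but each line prefixed with a timestamp (uses ts from moreutils)
  "$@" 2>&1 | ts '%F %T' | tee ~/tmp/"${1##*/}"-"$(date +%FT%T)".log
}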
Just run your command like you normally would, prefixed with justlog or timelog. Each run creates a timestamped log file in ~/tmp named after the command.
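For example, with a hypothetical command:

justlog rsync -av /src/ /dst/
timelog rsync -av /src/ /dst/

# leaving something like:
# ~/tmp/rsync-2024-05-01T10:23:45.log
# ~/tmp/rsync-2024-05-01T10:24:02.log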
But wait, there's more!!
I didn't want to have to do an ls to find the name of the log file that justlog or timelog just created for me. So, I added 3 more functions:
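Again as a sketch (assumed bodies; the real ones are in the repo):

lesslog() { less "$(ls -t ~/tmp/*.log | head -1)"; }        # newest log in less
viewlog() { view "$(ls -t ~/tmp/*.log | head -1)"; }        # newest log, read-only vim
greplog() { grep "$@" "$(ls -t ~/tmp/*.log | head -1)"; }   # grep the newest log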
So, you run your command with justlog (or timelog), and then you just use lesslog or viewlog (I'll probably create an emacslog for those people):
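For example (the command is hypothetical):

justlog df -h
lesslog    # opens the log that justlog just wrote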
That's it: no ls ~/tmp, no tab completion games to find the file name. Just run lesslog (or viewlog if you like using vim to look at logs).

But, wait! There's more!
"I use grep all the time on my log files" - And the answer is, you guessed it,
greplog
First, get the text from all 800 servers' /etc/cron.d/atop files that are broken:
Then get the hostnames (on the line above the output in the file :-) ) with greplog:
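With the sketch above, that second step could look something like this (the pattern is hypothetical; -B1 prints the line above each match):

greplog -B1 'atop'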
You can simply use "script". See man script for details. Example:
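For example (the script name is hypothetical; -a appends instead of overwriting):

script -a -c 'bash /home/scripts/cron/myjob.sh' /home/scripts/cron/logs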
So, here is an option I used and consider to be a little more friendly. Essentially, define a task function at the top of your script, and define all of your sections with their own functions to be passed to calls of the task function. It could be extended to add some specific INFO, WARN, DEBUG, and ERROR flags for your logs as well. They would simply be functions that use an echo to wrap your message in a consistent way, as all of the outputs are going to your log anyway.
It would look something like this:
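A sketch of the pattern (the function names and extra logging are assumptions, built around the question's commands):

#!/bin/bash
LOGFILE=/home/scripts/cron/logs

task() {
  # run one named section, logging its start, its output, and its exit code
  local name=$1; shift
  {
    echo "$(date) : ${name} - start"
    "$@" 2>&1
    echo "$(date) : ${name} - done (exit $?)"
  } >> "$LOGFILE"
}

part1() { ssh -f [email protected] 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'; }
part2() { ssh [email protected] 'bash /www/htdocs/server.com/scripts/part2.sh logout exit'; }
part3() { ssh [email protected] 'bash /www/htdocs/server.com/scripts/part3.sh logout exit'; }

task "part 1" part1
task "sleep 120" sleep 120
task "part 2" part2
task "part 3" part3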
LIMITATIONS and CAUTION: