I am running the following command every 5 minutes in my crontab to keep Phusion Passenger alive.
*/5 * * * * wget mysite.com > /dev/null 2>&1
When it runs, it performs a wget on the site URL and routes STDOUT/STDERR to /dev/null. When I run this from a command line it works fine and doesn't produce an index.html file in my home directory.
When it runs from cron, it creates a new index.html file every five minutes, leaving me with a ton of index files which I don't want.
Is my syntax incorrect for running the cron job? From a command line it works without a problem, but from cron it generates an index.html file in my home directory.
I'm sure I'm making a simple mistake, would appreciate it if anyone could help out.
You could do it like this:

*/5 * * * * wget -O /dev/null -o /dev/null example.com

Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all.

Do you need to actually download the contents or just receive the 200 OK? If you only need the server to process the request, why not simply use the --spider argument?

I would use the following:

*/5 * * * * wget -O - mysite.com > /dev/null 2>&1
The -O - option makes sure that the fetched content is sent to stdout.

You say you only need the "200 OK" response in a comment.
That allows for a solution with some additional advantages over those of wget -O /dev/null -o /dev/null example.com. The idea is not to discard the output in some way, but not to create any output at all.

That you only need the response means the data that is downloaded into the local file index.html does not need to be downloaded in the first place.
In the HTTP protocol, the command 'GET' is used to download a document. To access a document in a way that does everything except actually downloading the document, there is a special command 'HEAD'.
When using 'GET' for this task, the document is downloaded and discarded locally. Using 'HEAD' does just what you need: it does not transfer the document in the first place. It will always return the same result code as 'GET' would, by definition.
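To make the difference concrete, a HEAD exchange at the wire level looks roughly like this (illustrative only, with example.com as a placeholder and most headers trimmed):

HEAD / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1256

The status line and headers come back just as they would for 'GET', but no document body follows.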
The syntax to use the method HEAD with wget is a little odd: we need to use the option --spider. In this context, it just does what we want: access the URL with 'HEAD' instead of 'GET'.

We can use the option -q (quiet) to make wget not output details about what it does.

Combining that, wget will neither output anything to stderr, nor save a document:

wget -q --spider 'http://example.com/'
The exit code tells us whether the request was successful or not:
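For example (a quick check from an interactive shell, again using example.com as a placeholder):

wget -q --spider 'http://example.com/'; echo $?

This prints 0 when the request succeeded and a non-zero code otherwise (for instance, wget uses 8 when the server issues an error response).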
For a command in crontab, the fact that there is no output in either case means you no longer need the redirection to /dev/null, and any output that does appear can be taken as an indication of an error again.

Your example command would be changed to this:

*/5 * * * * wget -q --spider mysite.com
This has the same advantages as wget -O /dev/null -o /dev/null example.com. The additional advantage is that the log output and the document output are never generated in the first place, instead of being generated and then discarded locally. Of course, the big difference is that it avoids downloading and then discarding the document, index.html.

Maybe your question should be about this instead; the webpage says:
This shouldn't require any keepalive scripts.
Otherwise kasperd's solution is perfect.
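For reference, the kind of Passenger configuration the quoted page is talking about would look roughly like this in the Apache configuration (a sketch only; the exact directives and values depend on your Passenger version and how many application processes you want kept warm):

# Keep idle application processes from being shut down,
# so no external keepalive ping is needed.
PassengerPoolIdleTime 0
# Always keep at least one process of the application running.
PassengerMinInstances 1

With something like this in place, the cron-based wget ping becomes unnecessary.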