I was wondering if I could get some help with monitoring text on a website. For example, if I wanted to monitor google.com for the text "Privacy", I thought I would use the following command:
check_http -H google.com -u http://www.google.com -s "Privacy"
But it is not working: I get "OK" no matter what I put in quotes. I am obviously using either the wrong command or the wrong option. Please help.
Try leaving out the -u. -u gives the path (page) to retrieve, not the entire URL; the default is "/". Here is what it looks like when I leave it out.
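That is, something like this (check_http goes CRITICAL if the string isn't found in the page body):

check_http -H www.google.com -s "Privacy"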
If you want to check a specific page, use -u like this:
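For example (the path here is just a placeholder):

check_http -H www.google.com -u /somepage.html -s "Privacy"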
Another option is to use check_curl from monitoringexchange.org. In reality this is a titch too complicated for what you are trying to do, but I have found it extends the functionality when you need it for parsing data from websites and submitting input.
Contents of my customized non-variable check_curl below:
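A minimal sketch of the idea (not the script verbatim, just the shape of it): a curl-plus-grep wrapper that follows the Nagios exit-code convention of 0 = OK, 2 = CRITICAL.

#!/bin/sh
# check_curl <url> <string> - report CRITICAL unless <string> appears at <url>
URL="$1"
STRING="$2"

# -s silences progress output, -f makes curl fail on HTTP errors (4xx/5xx)
BODY=$(curl -s -f "$URL")
if [ $? -ne 0 ]; then
    echo "CRITICAL: could not retrieve $URL"
    exit 2
fi

if echo "$BODY" | grep -q "$STRING"; then
    echo "OK: found \"$STRING\" at $URL"
    exit 0
else
    echo "CRITICAL: \"$STRING\" not found at $URL"
    exit 2
fi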
We wrote a custom perl script using LWP and HTML::Tree to search for particular strings. We also just md5sum some pages where the content doesn't change.
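The md5sum variant is simple enough to sketch in shell: hash the fetched page and compare against a stored known-good checksum (the URL and checksum below are placeholders).

#!/bin/sh
# Checksum check for pages whose content should never change.
URL="http://www.example.com/static-page.html"
KNOWN="0123456789abcdef0123456789abcdef"   # placeholder known-good md5

SUM=$(curl -s "$URL" | md5sum | awk '{print $1}')
if [ "$SUM" = "$KNOWN" ]; then
    echo "OK: page checksum matches"
    exit 0
else
    echo "CRITICAL: checksum $SUM does not match expected $KNOWN"
    exit 2
fi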
If I get a problem like this with the check_http plugin, I usually wget the URL I'm looking for the text in and then examine the output.
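For example, dumping the page to stdout and grepping it:

wget -qO- http://www.google.com | grep Privacy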
If you want something that does a bit more than check_http, such as logging into a web site or checking more than one string, have a look at WebInject - it's a nice plugin.
Here are a couple of checks I use.
The first looks for the text "CCServerService" and reports an error if it doesn't see it; the second checks for the text "error" and reports an error if it sees it:
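In WebInject's testcase XML, checks along those lines look roughly like this (URLs and descriptions are placeholders; verifypositive fails the check when the string is absent, verifynegative fails it when the string is present):

<testcases repeat="1">
    <case
        id="1"
        description1="check for CCServerService"
        method="get"
        url="http://myserver.example.com/status"
        verifypositive="CCServerService"
    />
    <case
        id="2"
        description1="check page is free of errors"
        method="get"
        url="http://myserver.example.com/status"
        verifynegative="error"
    />
</testcases>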
You need to specify the -e option for the -s option to have the desired effect:
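A sketch, assuming the target server answers with a 200 status line (-e matches against the first line of the server response, -s against the page content):

check_http -H www.google.com -e "HTTP/1.1 200" -s "Privacy"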