Any tips on software to monitor whether a web server is up and running on Linux? It should be able to run knowing nothing more than the URL, and it must be able to send an email alert when the site goes down. It shouldn't be hard to write a script for this myself, but that seems pointless if there is already something nice out there.
Note that I am going to monitor internal servers, so this needs to be a tool that runs on my machine on the same network, not an external web-based service.
And note that small and simple solutions are preferred.
Update: I eventually created a small python script that I am currently using for this, it can be found here.
You can use wget in a script like this
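A minimal sketch of such a script (the URL and alert address are placeholders to adjust, and it assumes a working `mail` command on the machine):

```shell
#!/bin/sh
# Check that the server answers; send an email alert if it does not.
URL="http://intranet.example.com/"   # placeholder: your internal URL
ALERT="admin@example.com"            # placeholder: where alerts should go

# --spider: only check availability, don't download the page body
# --tries=1 --timeout=3: give up after one attempt of three seconds
if ! wget --quiet --spider --tries=1 --timeout=3 "$URL"; then
    echo "Web server at $URL is not responding" \
        | mail -s "ALERT: $URL appears to be down" "$ALERT"
fi
```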
You will get an email if wget cannot reach the site on the first attempt within three seconds.
Set up a cron job to run the script every few minutes.
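For example, assuming the script is saved at a hypothetical path like /usr/local/bin/check_web.sh, a crontab entry (added with `crontab -e`) to run it every five minutes would be:

```
*/5 * * * * /usr/local/bin/check_web.sh
```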
There are many other alternatives, but this is probably the simplest to set up from scratch.
You have many options, I'll give you two.
Nagios is a full-blown monitoring application capable of monitoring much more than HTTP, but it handles that as well. It can also create all kinds of reports ("Tell me the uptime percentage of our server/service X this week/month/year...").
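As a rough illustration only (not a complete configuration — the host name is made up, and it assumes the stock `check_http` command, a host definition, and a notification contact are already set up), a Nagios HTTP service check looks something like:

```
define service {
    use                  generic-service
    host_name            internal-web-1
    service_description  HTTP
    check_command        check_http
}
```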
Monit is another popular choice. It is maybe not as feature-rich as Nagios, but it's nice nevertheless.
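For comparison, a Monit check for a web server is only a few lines (a sketch; the host address and email are placeholders):

```
set daemon 120                       # poll every two minutes
set mailserver localhost             # so Monit can deliver the email
set alert admin@example.com          # placeholder alert address

check host internal-web with address 192.168.1.10
    if failed port 80 protocol http then alert
```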
Well, if you want to run something yourself, there are self-hosted options such as Zabbix and Zenoss; alternatively, there are managed (hosted) monitoring services.
Personally, I think Zabbix and Zenoss are overkill if you simply wish to monitor the status of a web server. But if you also plan to monitor anything else, then they have more features than you'll ever need ;)
I've upvoted Richard's and Janne's answers, but if you want more detail about what your web server is sending and receiving, the first couple of chapters of the O'Reilly book "Web Client Programming with Perl" by Clinton Wong give a great overview of the HTTP protocol. If you want more detailed monitoring than just up/down and want to include response codes, etc., it is a fine place to start.
The book is old (published in 1997) but still valid, and O'Reilly has posted its contents online for free at http://oreilly.com/openbook/webclient/ as part of their OpenBook initiative.
I would vote up Janne's answer if I had rep.
An important note about Nagios: the fact that it's full-blown does not mean installation is hard or lengthy. It's quite simple and friendly.
Second, you should really check out what your hardware vendor has to offer. For example, I'm using HP ProLiants, and they ship really nice RPMs that help.
If you like what Nagios does but don't want to delve into the internals, you can also check out Opsview. It is Nagios plus a couple of other tools, delivered through a nice GUI. It's a pretty good starting point.
I would agree that Nagios is great software, but if you want freeware, I would suggest you take a look at AppPerfect Agentless Monitor. Linux server monitoring with AppPerfect is extremely lightweight and adds negligible overhead to the target system. You can monitor all the important statistics related to CPU, disk, network, and memory using this tool. Setup is very simple and the software is easy to use. Clear documentation and a tutorial for Linux server monitoring are also available here.
One solution I've been using is HashiCorp's Consul.
It certainly is more than a simple script with email output, but setting up the kind of monitoring you are talking about is still very easy (a few lines of JSON or HCL).
You would most likely create a template, but monitoring a single server could be done as follows:
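For instance, a single-service check definition might look like the following (a sketch; the URL, interval, and timeout are placeholders):

```json
{
  "service": {
    "name": "report_server",
    "check": {
      "id": "web-frontend",
      "http": "http://localhost:80/",
      "interval": "30s",
      "timeout": "3s"
    }
  }
}
```

Note that Consul itself tracks the health state; to turn a failing check into an email, you would pair it with a watch handler or a tool such as consul-alerts.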
One reason for suggesting it is that it does allow you to go beyond monitoring 'just' the web front-end if you want to, and it also allows you to group checks by service (e.g. your `report_server` service could have a check for the web front-end, one for the web back-end, and one for the primary DB, all of which would provide alerts tied to this one service).