I need to gather statistics on how long it takes to retrieve a web page once every couple of seconds.
I can do a
time wget --spider http://www.google.com/index.html
(--spider will not download the page, just check that it is there.)
With this command I can see how long it took to run and the status of the page (200 OK, 404 NOT FOUND, etc.).
The issue I'm having is that I need to keep track of the statistics: if I hit a web page every couple of seconds and every so often get a 404, I need to see those stats.
What do you think about this?
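For example (a sketch: the -S / --server-response flag makes wget print the HTTP headers it receives; note that wget writes them to stderr, so add 2>&1 if you want to pipe them to grep):

time wget --spider -S http://www.google.com/index.html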
This will give you output something like this (exact headers and timings vary):
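Spider mode enabled. Check if remote file exists.
...
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: ...
  Content-Type: text/html; charset=UTF-8
Remote file exists.

real    0m0.357s
user    0m0.002s
sys     0m0.005s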
You can then parse this into a spreadsheet. It may, admittedly, not be the most elegant solution. You can clean it up with a grep chain (like
grep -v Location: | grep -v Content-Type: | grep -v Date
etc.) or something more elegant. I'm curious to see what others come up with!

Check out Nagios for monitoring, or, even if you don't want a full Nagios install, check out the Nagios plugins: check_http will give you performance stats as well as simple, parseable output. You could use a shell script to put this into a file which you could process later.
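For example (the plugin path varies by distro; /usr/lib/nagios/plugins is common):

/usr/lib/nagios/plugins/check_http -H www.google.com -u /index.html

gives output along the lines of:

HTTP OK: HTTP/1.1 200 OK - 11219 bytes in 0.273 second response time |time=0.272850s;;;0.000000 size=11219B;;;0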
You could try something like this to get you going:
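A minimal sketch, assuming the Nagios plugins are installed (adjust the check_http path and the URL for your setup):

#!/bin/bash
# Poll the page every couple of seconds and print one timestamped line
# per check; check_http's output already includes the HTTP status and
# the response time.
CHECK=/usr/lib/nagios/plugins/check_http    # path varies by distro
while true; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $("$CHECK" -H www.google.com -u /index.html)"
    sleep 2
done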
If you use
>>
to redirect the output to a file, you can then either pull it into a spreadsheet or use grep, etc., to pull information from it.
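For instance, assuming the script above is saved as poll.sh:

./poll.sh >> page_stats.log
grep -c 404 page_stats.log    # rough count of checks that came back 404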