Working on medium-sized websites, we've always built our own real-time traffic-graphing solutions and displayed them on a big screen, mission-control style, so that if traffic starts to climb, load starts to climb, or latency increases, we can start watching proactively before the monitoring system goes off.
Now I'm starting at a new company and we need the same thing. Are there companies that sell mission-control-style real-time website monitoring products? Not web analytics, but real-time graphing of metrics like simultaneous users, page views, hits, and average HTML render time. It would require installing some kind of agent on each web server (or on the load balancer), since JavaScript tracking is insufficient to detect, for example, when a spam bot starts pummeling the site. I've Googled and can't find anything.
The best tool I have seen is http://www.splunk.com: it watches your log files in real time and charts data based on your queries.
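Splunk's search language does the aggregation for you, but the underlying idea is straightforward: parse log lines into fields, then count. A minimal sketch in Python (the sample lines are made up; the regex assumes the Apache/nginx combined log format):

```python
import re
from collections import Counter

# Apache/nginx "combined" log format: ip, identd, user, [timestamp],
# "METHOD path protocol", status, response size.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def hits_by_status(lines):
    # Count requests per HTTP status code -- the kind of breakdown a
    # Splunk query like "stats count by status" would chart.
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            counts[m.group("status")] += 1
    return counts

sample = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326',
    '1.2.3.4 - - [10/Oct/2023:13:55:37 +0000] "GET /missing HTTP/1.1" 404 209',
    '5.6.7.8 - - [10/Oct/2023:13:55:38 +0000] "POST /login HTTP/1.1" 200 512',
]
```

Run this over a tailed log in a loop and you have the raw feed for a dashboard; Splunk just does it at scale, continuously, with a query language on top.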
Sounds like something Reconnoiter could do, perhaps combined with (sys)log shipping or some form of statistics export from the httpd.
Maybe a RUM (real user monitoring) tool would do the job, since it measures performance from the users' perspective. You could try a simple RUM tool like http://www.gear5.me (requires only a small JS snippet) or something more complex such as http://www.newrelic.com, which requires a module on the server for data acquisition.
There are a number of companies that sell appliances that tap the network, sniff the traffic, and put together the kinds of stats you're looking for.
CoRadiant, CA Wily CEM, CA NetQos Super Agent, ...
Or you can build one with Wireshark :-)
(Disclaimer: I am occasionally a paid CEM consultant)
Can't see how you would do "avg HTML render time" without a bit of JavaScript, but everything else you can script up using MRTG, assuming your server environment is *nix or you have Cygwin installed.
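MRTG can graph anything a script can print: an external target runs your command and reads four lines back (two values, an uptime string, a target name). A rough sketch of such a poller, assuming you count new access-log lines between polls and persist the file offset between runs (the target label and log contents are invented):

```python
import io

def count_new_requests(log, offset):
    # Count log lines appended since the last poll; return (hits, new offset).
    # In a real script, the offset would be saved to a state file between runs.
    log.seek(offset)
    hits = sum(1 for _ in log)
    return hits, log.tell()

def mrtg_report(hits, uptime="unknown", target="www1 page views"):
    # MRTG's external-script protocol: first value, second value, uptime
    # string, target name -- one per line. With a single metric, we repeat
    # it for both values.
    return f"{hits}\n{hits}\n{uptime}\n{target}"

# Stand-in for an access log file; a real poller would open the log on disk.
log = io.StringIO("GET /a\nGET /b\nGET /c\n")
hits, offset = count_new_requests(log, 0)
print(mrtg_report(hits))
```

Point an MRTG target's `Target[...]` line at the script and it graphs the counter like any SNMP interface.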
Being proactive is great, but your monitoring package should alert when anything out of the ordinary happens, not only after something goes critical. Unless you plan to have someone watch graphs 24/7.
We're using collectd to gather data and various web front-ends for RRD (collection3, drraw) to display it.
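collectd can also ingest custom web metrics: its Exec plugin runs a script and reads `PUTVAL` lines from its stdout. A sketch of the line format (the host/plugin/type identifiers here are invented for illustration):

```python
def putval(host, plugin, type_, value, interval=10):
    # One line of collectd's plain-text protocol, as read by the Exec plugin.
    # Identifier is host/plugin[-instance]/type[-instance]; "N" tells collectd
    # to substitute the current timestamp.
    return f'PUTVAL "{host}/{plugin}/{type_}" interval={interval} N:{value}'

# A script run by the Exec plugin would print one of these per interval,
# e.g. a gauge of currently active users scraped from the app.
print(putval("www1", "exec-webstats", "gauge-active_users", 42))
```

collectd stores whatever you emit into RRD files alongside its built-in metrics, so the same front-ends graph it.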
I am not sure if you have a budget for such tools -- if you do, I would suggest Splunk. I use it for a similar monitoring setup. Web server access/error logs are sent via syslog to Splunk, and we've built dashboards to provide us with specific views into that data. We can also display dashboard graphs in real time, since the newest version of Splunk includes a "real-time" timeframe.
Since there is also logic in our applications that feeds data to Splunk via syslog, we are able to correlate application performance data with traditional web server logs and extract useful information from it. That includes the processing and rendering metrics you mentioned.
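Feeding application metrics to Splunk over syslog is mostly a matter of emitting `key=value` pairs, which Splunk's field extraction picks up automatically. A sketch using Python's stdlib syslog handler (the hostname and field names are invented; the handler line is commented out so the snippet runs standalone):

```python
import logging
import logging.handlers

def metric_line(event, **fields):
    # Render a metric as key=value pairs; Splunk extracts these as fields
    # without any extra configuration. Keys are sorted for stable output.
    pairs = " ".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"event={event} {pairs}"

log = logging.getLogger("webstats")
log.setLevel(logging.INFO)
# Hypothetical setup: ship to the Splunk syslog listener over UDP.
# log.addHandler(logging.handlers.SysLogHandler(address=("splunk.internal", 514)))
log.info(metric_line("page_render", path="/checkout", ms=183))
```

On the Splunk side, a search like `event=page_render | timechart avg(ms)` then gives you a live render-time graph.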
YMMV as Splunk really needs the knowledge and acumen of a good administrator to set up and architect. I've seen more than a handful of people swear off Splunk only to find out it's because its hardware needs were architected so poorly that it was basically tripping over itself to fail. Its search language also has a bit of a learning curve if you're not already a power user. It's not for everyone, though I can attest that when it's architected correctly and you seed it with lots of data, it quickly becomes an indispensable monitoring and reporting tool.