I recently became aware of a proxy configuration issue that resulted in slow performance for users browsing websites. Most of our IT folks have a slightly different config because of the way we access dev & test environments, so we ended up getting a bunch of vague "the internet is slow!" complaints before fixing it. A few months ago we had a problem where a bug in an application killed performance on many PCs... but we had a very difficult time detecting it.
This is bugging me, because it's something we totally could have addressed proactively. The problem is that we have no instrumentation to tell us whether it usually takes 5 seconds or 5 minutes to run through the tasks our users do every day.
Does anyone out there know of a free/cheap tool that would allow us to script something like this:
- Load Internet Explorer, time the application start
- Go to google.com, time the page load
- Go to example.com, time the page load
- Close browser
I'd like to have a script do something like this every 15 minutes to develop a baseline and figure out what "slow" means for users. The internet is just one example; I could see this being useful for in-house and other applications as well.
In my opinion, applications themselves should expose these kinds of metrics to any standard monitoring suite, including setting sensible default warning thresholds. However, most applications don't do that, with a few notable exceptions such as Exchange monitored with System Center Operations Manager and so on...
...in this case I'd look at it more as a user and usability study problem. Regularly doing over-the-shoulder testing of user workflows would be a useful start, even though it's not automated.
Applications killing performance on clients could be caught with proper performance monitoring, though it needs to include all the kinds of metrics that can slow a PC to a crawl: CPU and memory load, disk and network I/O load and patterns, and so forth - just like with server monitoring.
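As a rough sketch of what that collection could look like on a Windows client, something like this PowerShell snippet could run on a schedule (the counter paths are the standard Windows ones; the output file is only an example):

```powershell
# Rough sketch: sample the usual "PC is crawling" suspects and append them to a CSV.
# The counter paths are standard Windows performance counters; the output file is just an example.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length',
    '\Network Interface(*)\Bytes Total/sec'
)

$set = Get-Counter -Counter $counters
$set.CounterSamples |
    Select-Object @{n='Time';e={$set.Timestamp}}, Path, CookedValue |
    Export-Csv -Append -NoTypeInformation 'C:\PerfBaseline\client-perf.csv'
```

Run that regularly and you get the same kind of baseline for the client itself that you're after for the browsing tasks.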
I understand the need for dev and test environment access, but I'm a strong proponent of having at least the first-line support folks on the same standard images, network configuration and so forth as the rest of the users - if it's impossible to implement that for everyone in the department.
Using remote management servers or multi-user workstations for day-to-day admin work is an easy way to avoid relying on the local PC being set up in a specific way or with specific tools.
I really like the idea of monitoring for slowness before users notice anything.
I would try to tie it in to whatever monitoring software you're already using (Nagios, etc) for convenience.
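A check script for that only has to measure something and exit with the standard Nagios plugin return codes (0 = OK, 1 = WARNING, 2 = CRITICAL). A rough PowerShell sketch, with a made-up URL and thresholds:

```powershell
# Rough sketch of a Nagios-style check: time one page request and map the result
# to the standard plugin exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL).
# The URL and thresholds below are only examples.
$url    = 'http://www.example.com/'
$warnMs = 2000
$critMs = 5000

$ms = (Measure-Command {
    (New-Object System.Net.WebClient).DownloadString($url) | Out-Null
}).TotalMilliseconds

if     ($ms -ge $critMs) { Write-Output "CRITICAL: $url took $([int]$ms)ms"; exit 2 }
elseif ($ms -ge $warnMs) { Write-Output "WARNING: $url took $([int]$ms)ms";  exit 1 }
else                     { Write-Output "OK: $url took $([int]$ms)ms";       exit 0 }
```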
The Cucumber framework looks interesting - http://cukes.info/ and there's a Nagios plugin for it. (Google "Cucumber-Nagios")
You could also script Internet Explorer with PowerShell or another scripting language. I always found that more cumbersome, though.
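If you do go the scripting route, a rough sketch of what the poster described might look like this (the URLs and log path are just examples, and it assumes the script runs in an interactive session where IE can launch):

```powershell
# Rough sketch: drive Internet Explorer via its COM object and time each step.
# The URLs and the log file path are only examples.
$log = 'C:\PerfBaseline\browse-times.csv'

# Time how long the browser takes to start.
$sw = [System.Diagnostics.Stopwatch]::StartNew()
$ie = New-Object -ComObject InternetExplorer.Application
while ($ie.Busy) { Start-Sleep -Milliseconds 100 }
$startupMs = $sw.ElapsedMilliseconds

# Navigate to a URL and wait until the page has finished loading (ReadyState 4).
function Get-PageLoadTime($browser, $url) {
    $sw = [System.Diagnostics.Stopwatch]::StartNew()
    $browser.Navigate($url)
    while ($browser.Busy -or $browser.ReadyState -ne 4) { Start-Sleep -Milliseconds 100 }
    $sw.ElapsedMilliseconds
}

$googleMs  = Get-PageLoadTime $ie 'http://www.google.com/'
$exampleMs = Get-PageLoadTime $ie 'http://www.example.com/'
$ie.Quit()

# Append one CSV row per run so a baseline builds up over time.
"{0},{1},{2},{3}" -f (Get-Date -Format s), $startupMs, $googleMs, $exampleMs |
    Out-File -Append -FilePath $log
```

Schedule it every 15 minutes and the CSV becomes your baseline.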
If you need to know the performance of your website and want to diagnose problems later, you need network monitoring software.
For free, Wireshark is good. For commercial use, Capsa is suitable.
Have you looked at HP SiteScope? Not only will it pull your system information, including potentially SNMP information from your proxy server, but you can also run application performance monitoring scripts for the web. The application sampling technology (essentially a GUI-less browser completing the scripted tasks) is shared with HP LoadRunner and HP Business Availability Center. Alerting and reporting are built into SiteScope.
Something you could do on a scripted basis as well would be to take a look at using curl with some timers around the beginning and end of the request. The pseudocode might look something like this:
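(Sketched here in PowerShell rather than literal curl; the URL and log path are placeholders, and a curl call wrapped in timers in a shell script would follow the same pattern.)

```powershell
# Rough sketch: put a timer around a single page request and log the result.
# The URL and the log path are placeholders.
$url = 'http://www.example.com/'

$sw = [System.Diagnostics.Stopwatch]::StartNew()
(New-Object System.Net.WebClient).DownloadString($url) | Out-Null
$sw.Stop()

"{0},{1},{2}" -f (Get-Date -Format s), $url, $sw.ElapsedMilliseconds |
    Out-File -Append -FilePath 'C:\PerfBaseline\pageload.csv'
```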
You could easily use a cron job or scheduled task to run the above every fifteen minutes or so. Use your favorite scripting language to execute the operation.