I am wondering specifically about the "Process\% Processor Time" counter. If you set it to an interval of, say, 10 seconds, are the data points a snapshot of what the CPU utilization is at that 10-second mark, or an average of the utilization over the past 10 seconds? It would seem naturally to be the former rather than the latter, but there has been some confusion among my colleagues and me, and I wanted to clarify.
Both. :)
Some things like available MB on a disk would be a snapshot - no reason to average that.
However, things like processor utilization are "cooked" using a "CookingType" (a formula). So, basically, it's an average. http://msdn.microsoft.com/en-us/library/aa392761%28VS.85%29.aspx
I had to write something that took the raw performance counters at two points in time, then did some math based on the time between them. You won't get the same values you see in perfmon without doing that time-based math - so it's an average.
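As a rough illustration, here is a minimal sketch of that two-sample calculation, assuming the PERF_100NSEC_TIMER formula that governs "Process\% Processor Time": 100 * (N1 - N0) / (D1 - D0), where N is the raw counter (CPU time consumed, in 100-ns units) and D is the system timestamp in the same units. The sample values and the function name are made up for illustration; how you actually read the raw counter and timestamp (PDH, WMI raw-data classes, etc.) is up to you.

```python
# Sketch of the PERF_100NSEC_TIMER calculation used for
# "Process\% Processor Time". The raw counter is the total processor
# time the process has consumed, in 100-nanosecond units; the
# denominator is the system timestamp in the same units.
# All sample values below are hypothetical.

def percent_processor_time(n0, d0, n1, d1):
    """Return % Processor Time between two raw samples.

    n0, n1: raw counter values (100-ns units of CPU time consumed)
    d0, d1: system timestamps (100-ns units) at the two sample times
    """
    if d1 == d0:
        raise ValueError("samples must be taken at different times")
    return 100.0 * (n1 - n0) / (d1 - d0)

# Hypothetical samples taken ~10 seconds apart (10 s = 100_000_000 ticks of 100 ns)
n0, d0 = 1_234_000_000, 50_000_000_000   # counter, timestamp at t0
n1, d1 = 1_259_000_000, 50_100_000_000   # counter, timestamp at t1

print(percent_processor_time(n0, d0, n1, d1))  # -> 25.0
```

Note that the result is, by construction, an average over the window between the two samples rather than an instantaneous reading, which is the point of the question.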
You can search MSDN for the formula that governs the raw performance data of the counter you're looking at (network utilization, processor performance, etc.) and check its CookingType - that should settle the debate with your colleagues.
http://msdn.microsoft.com/en-us/library/ms974615.aspx
Excerpt from the article: