I'd like to run perfmon for two days (as SQL Server master Brent Ozar suggests) to get an overall feel for my web app's database performance.
Or should perfmon be limited to a Dev/QA server with load tests that simulate production activity?
SQL Server, like most other products, generates the counters all the time, whether or not anything is listening (ignoring the -x startup option). Counter tracing is completely transparent to the application being monitored: there is a shared memory region to which the monitored application writes and from which monitoring sessions read the raw values at the specified interval. So the only cost associated with monitoring is the cost of the monitoring process itself and the cost of writing the sampled values to disk. Choosing a decent collection interval (I usually choose 15 seconds) and a moderate number of counters (50-100), and writing to a binary file format, usually leaves no noticeable impact on the monitored system.
But I'd recommend against using Perfmon (as in perfmon.exe) itself. Instead, get yourself familiar with logman.exe; see Description of Logman.exe, Relog.exe, and Typeperf.exe Tools. This way you don't tie the collection session to your login session. Logman, being a command-line tool, can be used in scripts and scheduled jobs to start and stop collection sessions.
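For example, a collection session can be created and controlled entirely from the command line; a small sketch, where the session name, counters, and output path are just illustrative:

    logman create counter SqlBaseline -si 15 -f bin -o C:\PerfLogs\SqlBaseline ^
        -c "\Processor(_Total)\% Processor Time" "\SQLServer:Buffer Manager\Page life expectancy"
    logman start SqlBaseline
    rem ...two days later...
    logman stop SqlBaseline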
There's nothing wrong with running perfmon on production boxes. It's relatively low-key and can gather a lot of good info for you. And how would you accurately simulate production loads if you didn't run some analysis on the production server? From Brent Ozar in your own link:
"I've run perfmon on a number of production Exchange boxes with no adverse effects."
Ever since I heard Clint Huffman (who wrote PAL, a utility for analyzing Perfmon logs) on a podcast, I have set up what I call the Flight Recorder on all of our production application servers. This practice has come in very handy for diagnosing problems and monitoring trends.
Below is the script I use to set up an auto-starting Perfmon collector, with log purging. If desired, it can be fed a file listing performance counters to collect (one per line) or a PAL threshold XML file. I like to use the PAL threshold files.
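The original script isn't reproduced here, but a minimal sketch of the same idea, using logman plus scheduled tasks for auto-start and purging, could look like this (the collector name, paths, and 14-day retention are illustrative, not the author's actual values):

    rem Create a collector that reads counters.txt (one counter per line) and
    rem rolls to a new binary log file every 24 hours.
    logman create counter FlightRecorder -cf C:\PerfLogs\counters.txt -si 15 ^
        -f bin -o C:\PerfLogs\FlightRecorder -v mmddhhmm -cnf 24:00:00
    logman start FlightRecorder

    rem Restart the collector at boot, and purge logs older than 14 days once a day.
    schtasks /create /tn "Start FlightRecorder" /sc onstart /ru SYSTEM ^
        /tr "logman start FlightRecorder"
    schtasks /create /tn "Purge FlightRecorder" /sc daily /st 00:30 /ru SYSTEM ^
        /tr "forfiles /p C:\PerfLogs /m *.blg /d -14 /c \"cmd /c del @path\""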
We do it quite frequently. It is also essential for establishing a baseline in the real environment, so you can compare later if there are issues or you need to perform a capacity study.
I recommend not going below a 10-second interval, though. If you are collecting many objects/counters and the interval is too short, it may impact operations.
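When it's time to compare a later capture against the baseline, relog.exe (covered in the same article as logman above) can thin and convert the binary logs; a small sketch, with file names assumed:

    rem Keep every 4th sample (one per minute at a 15-second interval) and convert to CSV.
    relog C:\PerfLogs\Baseline.blg -t 4 -f CSV -o C:\PerfLogs\Baseline.csv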
Microsoft has a PerfMon Wizard that will set up the task for you:
http://www.microsoft.com/downloads/details.aspx?FamilyID=31FCCD98-C3A1-4644-9622-FAA046D69214&displaylang=en
In an ideal world, where a production server exactly mirrors what a dev server does and is also an exact duplicate of the dev server, perfmon would never be required on the production server because the results would be the same as those on the dev server. Of course, that mythical situation never happens, so we do need to run perfmon on production servers, and there is absolutely nothing wrong with that. Among other things, we may need to use perfmon and other tools to learn why the production server isn't behaving the same as the dev server.
Why perfmon? I mean, recent versions of SQL Server have their own method of doing this, including building a (central) data warehouse of performance counters that can then be queried and reported against. There is zero sense in running perfmon on such servers.
I am, as always, astonished by all the posts here from people who obviously never read the documentation ;)
http://www.simple-talk.com/sql/learn-sql-server/sql-server-2008-performance-data-collector/ is a good start. IMHO that should work on almost every SQL Server that is used for production purposes.
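If the Management Data Warehouse is already configured, the built-in collection sets can be started from a script rather than the GUI; a hedged sketch, where the server name and collection set id are placeholders:

    sqlcmd -S MYSERVER -E -Q "EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 1"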
Nothing wrong with running Perfmon as many have suggested, but I would run Profiler instead, or in addition, with the same caveats: don't capture too much too often. Just capture long-running queries, i.e. duration > x seconds, or CPU > xx, or reads > xxxx. Very little impact, and you'll quickly see the queries that would benefit most from tuning.
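As a server-side alternative to the Profiler GUI, the same duration filter can be expressed as an Extended Events session; a rough sketch, where the session name, file path, and 5-second threshold are illustrative (note that duration is reported in microseconds on SQL Server 2012 and later):

    sqlcmd -S MYSERVER -E -Q "CREATE EVENT SESSION LongQueries ON SERVER ADD EVENT sqlserver.sql_statement_completed (WHERE duration > 5000000) ADD TARGET package0.event_file (SET filename = N'C:\PerfLogs\LongQueries.xel')"
    sqlcmd -S MYSERVER -E -Q "ALTER EVENT SESSION LongQueries ON SERVER STATE = START"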