We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clearly not as accurate as running a profiler trace, though, as you have no idea when SQL Server will decide to chuck out an execution plan.
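For illustration, a minimal sketch of this kind of DMV query, surfacing the top IO offenders (the column choice and ordering are illustrative, not our exact script):

-- Illustrative sketch: top statements by logical reads from the plan cache.
SELECT TOP 20
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_elapsed_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;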
We have quite a few queries where the wrong execution plan is cached, for example queries like the following:
SELECT TOP 30 a.Id
FROM Posts a
JOIN Posts q ON q.Id = a.ParentId
JOIN PostTags pt ON q.Id = pt.PostId
WHERE a.PostTypeId = 2
  AND a.DeletionDate IS NULL
  AND a.CommunityOwnedDate IS NULL
  AND a.CreationDate > @date
  AND LEN(a.Body) > 300
  AND pt.Tag = @tag
  AND a.Score > 0
ORDER BY a.Score DESC
The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan):
However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines):
To overcome this, we were advised to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE).
OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal. Executions are tracked in sys.dm_exec_query_stats.
RECOMPILE results in the best plan being chosen; however, no execution counts or stats are tracked in sys.dm_exec_query_stats.
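For reference, one way to spot-check this is to filter the DMV by statement text and see whether a row shows up at all (the LIKE filter below is just illustrative):

-- Illustrative spot-check: look for the statement in the plan-cache stats.
SELECT qs.execution_count, qs.total_logical_reads, qs.last_execution_time, st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE N'%PostTags pt ON q.Id = pt.PostId%';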
Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)?
Is this behavior by design?
Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats?
Note: the framework will always execute parameterized queries using sp_executesql.
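For completeness, such a call looks roughly like this (the parameter types and sample values are assumptions, and the hint shown is the RECOMPILE variant):

-- Illustrative only: roughly how the framework submits the statement via
-- sp_executesql. The parameter types (datetime, nvarchar(35)) and the
-- sample values are assumptions.
EXEC sp_executesql
    N'SELECT TOP 30 a.Id
      FROM Posts a
      JOIN Posts q ON q.Id = a.ParentId
      JOIN PostTags pt ON q.Id = pt.PostId
      WHERE a.PostTypeId = 2
        AND a.DeletionDate IS NULL
        AND a.CommunityOwnedDate IS NULL
        AND a.CreationDate > @date
        AND LEN(a.Body) > 300
        AND pt.Tag = @tag
        AND a.Score > 0
      ORDER BY a.Score DESC
      OPTION (RECOMPILE)',
    N'@date datetime, @tag nvarchar(35)',
    @date = '2010-01-01',
    @tag = N'sql-server';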
Perhaps you should use a plan guide instead of OPTION (RECOMPILE). You already have a good plan, so just add it as a plan guide for your query and the optimizer will then produce this plan every time. See Optimizing Queries in Deployed Applications by Using Plan Guides and Specifying Query Plans with Plan Forcing.
In your case it is really trivial: simply call sp_create_plan_guide_from_handle with the handle of the good query plan:
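Something along these lines (a sketch: the plan-handle lookup filter and the guide name are placeholders; grab the handle while the good plan is in cache):

-- Illustrative sketch: capture the handle of the good plan while it is cached,
-- then freeze it into a plan guide.
DECLARE @plan_handle varbinary(64);

-- Placeholder filter; it assumes it matches the single cached plan you want.
SELECT @plan_handle = qs.plan_handle
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE N'%SELECT TOP 30 a.Id%PostTags pt%';

-- Guide name is a placeholder.
EXEC sp_create_plan_guide_from_handle
    @name = N'Guide_TopPostsByTag',
    @plan_handle = @plan_handle,
    @statement_start_offset = NULL;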