I know this is kind of a funny request, but I'd like to disable page caching on our development database. I believe it would make it much easier to see the impact of various tweaks, and I'm most curious about the time to retrieve uncached pages, not so much how efficiently MSSQL can pull data from cache.
However, this must be done at the database level; I can't just set the server's memory cache to an absurdly low level, since that risks knocking out index caches as well, and our development server hosts other databases I'd rather not touch.
AFAIK, the commands to flush the cache do so across all databases on the server, and I'd rather not do that to the other databases on our development system.
This isn't strictly mandatory, but benchmarking is fairly difficult to get right when my entire development database fits in the page cache.
Consider running DBCC DROPCLEANBUFFERS right before each test. It will do what you want, as a one-time drop of the cache. Note that it only drops clean pages, so run CHECKPOINT first to flush any dirty pages to disk; you'll need to repeat the pair at the start of each test.
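For example, a minimal test harness might look like this (dbo.YourTable is a placeholder for whatever you're actually benchmarking, and DBCC DROPCLEANBUFFERS requires sysadmin rights, so don't do this on a shared server you care about):

    -- Flush dirty pages to disk first; DROPCLEANBUFFERS only removes clean pages.
    CHECKPOINT;
    GO
    -- Drop all clean pages so the next run starts with a cold cache.
    DBCC DROPCLEANBUFFERS;
    GO
    -- The query under test, e.g.:
    SELECT COUNT(*) FROM dbo.YourTable;
    GO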
If your test database isn't being used by anything else, you could detach and then immediately reattach it. (You can do this with a couple of system stored procedure calls, sp_detach_db and sp_attach_db.) That would throw its pages out of the cache, and very few, if any, data pages would be cached until you started running queries.
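A sketch of that sequence, assuming a database named DevDB and made-up file paths (adjust both to match your system):

    USE master;
    GO
    -- Detaching evicts the database's pages from the buffer pool.
    EXEC sp_detach_db @dbname = N'DevDB';
    GO
    -- Reattach immediately; the cache for this database starts out cold.
    EXEC sp_attach_db @dbname = N'DevDB',
        @filename1 = N'C:\Data\DevDB.mdf',
        @filename2 = N'C:\Data\DevDB_log.ldf';
    GO

(On newer versions, CREATE DATABASE ... FOR ATTACH is the preferred replacement for sp_attach_db, but both evict the pages just the same.)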
Of course, if you are benchmarking something more complex than a single query and someone else runs a big query against another database on your shared development system, your results (measured in seconds) may be skewed: some or all of your pages could be evicted from the data cache, making your query run longer than it otherwise would.
If your goal is benchmarking single queries (or a set of queries, or a stored procedure) with an eye toward optimizing them, and you don't have a server to yourself, I suggest focusing on page reads rather than the time it takes to run the query.
If you have a slow (development) server and a fast (production) server, with identical data and code, a query might run in 10 seconds on the slow server and 1 second on the fast server. They will both read the same number of pages. (You will find some small variation, but it's usually on the order of less than 1%.) If you improve the query so it reads fewer pages on the slow server, it will read fewer pages on the fast server. A query that reads fewer pages is almost always the best performing.
You can get page read information from Profiler, which can watch any client program. (When using Profiler, be sure to filter out any connections, databases, events, and columns you do not need, and make sure the trace itself is not impacting the performance of the server you are profiling.)
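If you'd rather script the trace than click through the GUI, a server-side trace can capture the Reads column. Here's a rough sketch; the trace file path and database name DevDB are assumptions, and the event/column IDs (12 = SQL:BatchCompleted; 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads; 3 = DatabaseID) come from the trace documentation:

    DECLARE @traceid int, @on bit, @maxsize bigint, @dbid int;
    SET @on = 1;
    SET @maxsize = 50;
    -- .trc is appended to the file name automatically.
    EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\reads_trace', @maxsize;
    -- Capture TextData, SPID, Duration, and Reads for each completed batch.
    EXEC sp_trace_setevent @traceid, 12, 1,  @on;
    EXEC sp_trace_setevent @traceid, 12, 12, @on;
    EXEC sp_trace_setevent @traceid, 12, 13, @on;
    EXEC sp_trace_setevent @traceid, 12, 16, @on;
    -- Filter to the database under test (comparison operator 0 = equal).
    SET @dbid = DB_ID(N'DevDB');
    EXEC sp_trace_setfilter @traceid, 3, 0, 0, @dbid;
    -- 1 = start; use 0 to stop and 2 to close the trace when you're done.
    EXEC sp_trace_setstatus @traceid, 1;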
(This used to be harder when dev systems were single-core and the server might be dual- or quad-core: plans on the dev system would come out differently for big queries, which are the ones you generally want to optimize.)
Another angle is to watch logical reads and physical reads for a query when running it in SSMS. Look for the SET STATISTICS IO option in the Query Options dialog box, under the Advanced section (or just run SET STATISTICS IO ON in your query window). This works best for single queries or simple stored procedures. For complicated things that touch a lot of tables, it gets ugly; in that case, find the worst offenders, isolate the really bad parts, and look at those in detail.
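In its simplest form, that looks like the following (dbo.YourTable is again a placeholder):

    -- Report logical and physical reads per table for each statement.
    SET STATISTICS IO ON;
    GO
    SELECT COUNT(*) FROM dbo.YourTable;
    GO
    SET STATISTICS IO OFF;
    GO

The Messages tab then shows a line per table, something like "Table 'YourTable'. Scan count 1, logical reads 1042, physical reads 0, read-ahead reads 0" (numbers illustrative). Logical reads count pages touched in the buffer pool; physical reads count pages that actually came from disk.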
You will want to minimize logical reads. If you minimize logical reads, physical reads should also drop whenever the server has to pull data from disk. And in situations where the data you need is already cached, minimizing logical reads will still make the query run faster.