We have a beefy Windows Server 2008 x64 box (4 x 4-core CPUs, 32 GB RAM) running SQL Server 2005 64-bit. We have a small (6 GB) but very important database that is somewhat slow to access until its pages are cached in memory. The usage is very much random I/O, so the odds of any given page already being in memory are low, and end users complain about the initial slowness. The disks are fast enough (local 15K SAS), but I suspect the app is somewhat clumsily written (it's a COTS solution). Is there a way to "force" a database into memory in SQL Server 2005 (the vendor doesn't support 2008, so we shouldn't upgrade yet) to help avoid the initial cache-filling blues?
My current workaround is a script that runs SELECT * against each table to pull data pages into memory, but some objects (indexes, full-text catalogs, etc.) are not cached by this method, and modifying the script to interrogate every index and write appropriate WHERE clauses to touch them is boil-the-ocean complex.
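For reference, the warm-up script is roughly along these lines (a rough sketch rather than the exact script; it walks sys.tables and adds an INDEX(0) hint so each scan reads the base data pages instead of satisfying the count from some small index):

    DECLARE @sql nvarchar(max);
    SET @sql = N'';

    -- Build one scan statement per user table.  The INDEX(0) hint forces a scan
    -- of the heap/clustered index so the base data pages are the ones read.
    SELECT @sql = @sql
        + N'SELECT COUNT_BIG(*) FROM '
        + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
        + N' WITH (INDEX(0));' + CHAR(13) + CHAR(10)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    EXEC sys.sp_executesql @sql;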
No, there's no way to force a database into cache, unfortunately. Your brute-force method is probably the most straightforward. You might be able to get closer by using index defrag scripts with a very low threshold setting - say, rebuild the index if it's even 1% fragmented - like this:
http://sqlserverpedia.com/wiki/Index_Maintenance
It'll take longer and involve more writes to the disk, but it'll have the side effect of defragging your indexes and updating statistics, which is a good idea anyway.
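A rough sketch of that idea (not the script from the link, just an illustration of a 1% threshold against sys.dm_db_index_physical_stats):

    -- Rebuild anything that's even 1% fragmented; the rebuild reads and
    -- rewrites the whole index, pulling it through the buffer pool.
    DECLARE @sql nvarchar(max);
    SET @sql = N'';

    SELECT @sql = @sql
        + N'ALTER INDEX ' + QUOTENAME(i.name)
        + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
        + N' REBUILD;' + CHAR(13) + CHAR(10)
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN sys.objects AS o ON o.object_id = ips.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE ips.avg_fragmentation_in_percent >= 1.0
      AND ips.index_id > 0          -- skip heaps
      AND o.is_ms_shipped = 0;

    EXEC sys.sp_executesql @sql;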
Ok - I can't comment on Brent's answer yet (I don't have enough rep) - but if you're going to go the defrag route, don't necessarily rebuild the indexes: a rebuild builds new indexes, possibly growing the database if there isn't enough free space, and it guarantees that your next log backup is at least the size of your indexes (and, depending on recovery model, your log may contain a ton of log records too). If you're going to go the defrag route, use ALTER INDEX ... REORGANIZE instead, which doesn't require any free space (well, one 8 KB page) but will read the leaf level into memory and only operate on the fragmented pages. The non-leaf levels should come in quickly after a few queries and (depending on fan-out) should be a lot less data than the leaf level. See the sketch below.
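For example, something along these lines (a sketch only; it just issues ALTER INDEX ALL ... REORGANIZE against every user table that has at least one index):

    DECLARE @sql nvarchar(max);
    SET @sql = N'';

    -- Reorganize runs in place, is logged per page, and walks the leaf level.
    SELECT @sql = @sql
        + N'ALTER INDEX ALL ON '
        + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
        + N' REORGANIZE;' + CHAR(13) + CHAR(10)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE EXISTS (SELECT 1 FROM sys.indexes AS i
                  WHERE i.object_id = t.object_id AND i.index_id > 0);

    EXEC sys.sp_executesql @sql;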
Why are the database objects being flushed from cache in the first place? Are you restarting the SQL Server services or taking the database offline and back online? Or are the pages being pushed out by caching from other databases?
Regards,
SCM.
If the database is that small, consider putting it on SSD?
I've had some scenarios where updating statistics WITH FULLSCAN on key tables has forced data into cache and made my subsequent DML around those tables a lot faster. And this was not a result of out-of-date statistics, since it resulted in no changes to the execution plans.
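For example (dbo.Orders is just a placeholder name, and sp_MSforeachtable is an undocumented but long-standing helper if you want to touch every table):

    -- A FULLSCAN statistics update has to read every row, so it pulls the
    -- table's pages into cache as a side effect.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- Or, for every table in the database:
    EXEC sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN;';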
Why don't you install a second instance of SQL Server with only that database, and set the minimum memory for that instance to 6GB?
This will guarantee that your other databases never poach memory from your "small but very important" database.
It will also mean that you can take the other instance offline and your small DB will remain in memory.
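Something like this on the dedicated instance ('min server memory' is an advanced option, so it has to be exposed first; 6144 MB = 6 GB):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Reserve 6 GB so other workloads on the box can't reclaim it.
    EXEC sp_configure 'min server memory (MB)', 6144;
    RECONFIGURE;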
I'd use Profiler to check your SQL. Compare 'logical' reads to 'physical' reads. SQL Server is smart and will use the RAM it needs for the most efficient results.
Also check that your auto-update statistics are up to date.
Without more idea of the types of queries and the table sizes, it sounds a bit strange.
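If you'd rather not fire up Profiler, SET STATISTICS IO gives the same logical-vs-physical breakdown per query (the table name below is only a placeholder):

    -- 'physical reads' / 'read-ahead reads' mean pages came from disk;
    -- logical reads alone mean the pages were already in the buffer pool.
    SET STATISTICS IO ON;

    SELECT TOP (100) *
    FROM dbo.SomeImportantTable;   -- substitute one of the app's real queries

    SET STATISTICS IO OFF;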