For performance testing I need to clear Windows' disk read cache. I tried googling but couldn't find anything beyond rebooting and similar manual steps. Before I give in and do that, I'd like to know if anyone knows of a way to clear the Windows disk read cache. I'm testing on Windows 7, but I'm also interested in Windows XP solutions.
I'd asked the same question on Stack Overflow
https://stackoverflow.com/questions/478340/clear-file-cache-to-repeat-performance-testing
I was using Win XP, but the best solution I came up with was:
Fill the cache with data that you know won't be used in the test, then run the test.
For a much better view of the Windows XP filesystem cache, try ATM by Tim Murgent - it lets you see both the filesystem cache Working Set size and the Standby List size in a more detailed and accurate view. For Windows XP you need the old version 1 of ATM, which is available for download here, since V2 and V3 require Server 2003, Vista, or higher.
You will observe that although CacheSet reduces the "Cache WS Min", the actual data continues to exist in the form of standby lists, from where it can be used until it has been replaced with something else. To replace it with something else, use a tool such as MemAlloc, flushmem by Chad Austin, or Consume.exe from the Windows Server 2003 Resource Kit Tools; a rough sketch of what such a tool does follows.
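What those tools essentially do is commit a large block of memory and touch every page, so the memory manager has to repurpose pages from the standby list. A minimal sketch, assuming a 64-bit build; the 3 GB target is an arbitrary figure and should be sized to roughly the RAM of the test machine:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    // Assumed target: adjust to roughly the size of RAM on the test machine.
    // A 64-bit build is assumed; on 32-bit the per-process address space
    // limits how much a single process can consume.
    const unsigned long long total = 3ull * 1024 * 1024 * 1024;
    const SIZE_T chunk = 64 * 1024 * 1024;   // allocate in 64 MB pieces

    for (unsigned long long done = 0; done < total; done += chunk)
    {
        char* p = (char*)VirtualAlloc(NULL, chunk,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (!p)
            break;  // out of memory or address space; stop consuming
        // Touch every page so it is backed by physical RAM, forcing the
        // memory manager to repurpose standby-list pages.
        for (SIZE_T i = 0; i < chunk; i += 4096)
            p[i] = 1;
    }

    printf("Memory consumed; press Enter to release it and exit.\n");
    getchar();  // all allocations are released when the process exits
    return 0;
}
```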
I've used RAMMap. It can free several types of memory allocations.
The contents of the file cache can be observed on the "File Summary" tab, and selecting "Empty Standby List" from the "Empty" menu should clear this cache.
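If you want the same effect from a scripted benchmark run, RAMMap's "Empty Standby List" action is commonly reported to rely on the undocumented NtSetSystemInformation call. A heavily hedged sketch, assuming the publicly reverse-engineered class value (80, SystemMemoryListInformation) and command value (4, purge standby list), and an elevated process that can enable SeProfileSingleProcessPrivilege:

```cpp
#include <windows.h>
#include <stdio.h>

typedef LONG (WINAPI *NtSetSystemInformation_t)(INT, PVOID, ULONG);

// Enable a named privilege on the current process token (best effort).
static BOOL EnablePrivilege(LPCWSTR name)
{
    HANDLE tok;
    TOKEN_PRIVILEGES tp = {0};
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &tok))
        return FALSE;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    LookupPrivilegeValueW(NULL, name, &tp.Privileges[0].Luid);
    BOOL ok = AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
    CloseHandle(tok);
    return ok && GetLastError() == ERROR_SUCCESS;
}

int main()
{
    const INT  SystemMemoryListInformation = 80;  // assumed, undocumented class
    ULONG      command = 4;                       // assumed: purge standby list

    EnablePrivilege(L"SeProfileSingleProcessPrivilege");

    NtSetSystemInformation_t NtSetSystemInformation =
        (NtSetSystemInformation_t)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtSetSystemInformation");
    if (!NtSetSystemInformation)
        return 1;

    LONG status = NtSetSystemInformation(SystemMemoryListInformation,
                                         &command, sizeof(command));
    printf("NtSetSystemInformation returned 0x%08lX\n", status);
    return status == 0 ? 0 : 1;
}
```

Since the call is undocumented, treat it as best effort and verify with RAMMap that the standby list actually shrank.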
Reboot the machine.
The best practice is to ensure that any test files you are benchmarking with are 2x larger than the array controller cache (or the Windows OS memory, if benchmarking in a VM guest), with a minimum 1 GB test file. This ensures that any caching will be negated. We use SQLIO for disk benchmarking; there is a wealth of information in the accompanying documentation.
Echoing Greg, the way to work around this problem is to ensure that the data set you're working with greatly exceeds the amount of available RAM. If you're testing on a hardware platform that also includes significant controller and disk-based caches, you'll want to make sure you're exceeding those amounts as well. This ensures that the performance you see reflects true hardware performance rather than the software optimizations all those layers of cache introduce.
That said, if you're really just looking to purge the read cache of useful data before running benchmarks that DO want to use the read cache, the way to do it is to read in a single file sized just under your read-cache memory and perform some file operations on it. This will purge the cache and fill it with this single large file. Once you close it, your cache is effectively flushed of the data you care about. The tricky part is figuring out how large that file needs to be, which these days could easily be on the order of 3 GB; at that point you may need several 1 GB junk files to make it work. A rough sketch of this approach follows.
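A minimal sketch of that idea, assuming a junk file has already been created at a hypothetical path (C:\temp\junk.bin) and sized as described above; an ordinary buffered sequential read is what pulls it through the cache and displaces whatever was cached before:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    // Hypothetical path to a pre-created junk file sized near the cache size.
    HANDLE h = CreateFileW(L"C:\\temp\\junk.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING,
                           FILE_FLAG_SEQUENTIAL_SCAN,  // plain cached, sequential read
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, "Could not open junk file (error %lu)\n", GetLastError());
        return 1;
    }

    static char buf[1 << 20];  // 1 MB read buffer
    DWORD read = 0;
    // Stream the whole file through the cache; each newly cached page pushes
    // out an older one once the cache is full.
    while (ReadFile(h, buf, sizeof(buf), &read, NULL) && read > 0)
        ;  // discard the data - only the caching side effect matters

    CloseHandle(h);
    return 0;
}
```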
I think most any option will be "manual stuff". Under UNIX, it's pretty standard to unmount/remount a device before each benchmark run, often with a "newfs" thrown in for good measure. I don't know if you can use command-line tools under Windows to unmount/mount devices, but if automation is your goal, then it would be worth looking for such utilities.
Take a look at this answer I just posted to my own question on Stack Overflow.
Basically: if you attempt to CreateFile a handle without write-share access, it will flush and then invalidate the cache, even if the call returns an error. Hope this helps!
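A minimal sketch of how that open might look, taking the answer at face value; the exact access/share combination used here (GENERIC_READ | GENERIC_WRITE requested, FILE_SHARE_WRITE withheld) is my reading of "without write-share access", not something spelled out above, and the path is a hypothetical placeholder:

```cpp
#include <windows.h>
#include <stdio.h>

// Attempt to drop the cached pages of one file before a benchmark run.
int main()
{
    HANDLE h = CreateFileW(
        L"C:\\temp\\testdata.bin",      // hypothetical file under test
        GENERIC_READ | GENERIC_WRITE,   // request write access...
        FILE_SHARE_READ,                // ...while withholding write sharing
        NULL,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL,
        NULL);

    // Per the linked answer, the flush/invalidate is a side effect of the
    // open attempt itself, so a failed open is not necessarily a problem.
    if (h != INVALID_HANDLE_VALUE)
        CloseHandle(h);
    else
        printf("CreateFile failed with error %lu (cache may still have been purged)\n",
               GetLastError());
    return 0;
}
```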