Is there a good way to prime a ZFS L2ARC cache on Solaris 11.3?
The L2ARC is designed to ignore blocks that have been read sequentially from a file. This makes sense for ongoing operation but makes it hard to prime the cache for initial warm-up or benchmarking.
In addition, highly fragmented files may benefit greatly from having sequentially-read blocks cached in the L2ARC (because on disk those reads are random), but with the current heuristics such files will never be cached, even if the L2ARC is only 10% full.
In previous releases of Solaris 10 and 11, I had success using `dd` twice in a row on each file. The first `dd` read the file into the ARC, and the second `dd` seemed to tickle the buffers so they became eligible for L2ARC caching. The same technique does not appear to work in Solaris 11.3.
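For reference, the double-read pass looked something like this (the dataset path is hypothetical, and the 8k block size is just chosen to match the recordsize mentioned below):

```shell
# Sketch of the old double-read priming technique (paths are hypothetical).
# Pass 1 pulls each file's blocks into the ARC; pass 2 re-reads the
# now-cached data, which seemed to make the buffers L2ARC-eligible.
prime_file() {
  dd if="$1" of=/dev/null bs=8k 2>/dev/null   # pass 1: read into ARC
  dd if="$1" of=/dev/null bs=8k 2>/dev/null   # pass 2: re-touch cached buffers
}

for f in /tank/data/*; do   # hypothetical dataset path
  prime_file "$f"
done
```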
I have confirmed that the files in question have an 8k recordsize, and I have tried setting `zfs_prefetch_disable`, but this had no impact on the L2ARC behaviour.

UPDATE: `zfs_prefetch_disable` turns out to be important; see my answer below.
If there is no good way to do it, I would consider using a tool that issues random reads covering 100% of a file. Given that the L2ARC is persistent in 11.3, this might be worth the time. Do any tools like this exist?
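Failing an existing tool, something like the sketch below might do: it touches every 8k-aligned block of a file exactly once, in shuffled order, so no access pattern looks sequential to the heuristics. It assumes GNU `shuf` is available (on Solaris 11 the GNU utilities typically live under `/usr/gnu/bin`); block size and paths are assumptions to adjust.

```shell
# Sketch of a random-read cache warmer: read every block of a file once,
# in random order, so the L2ARC never sees a sequential pattern.
# Requires GNU shuf; block size defaults to the 8k recordsize above.
random_read_all() {
  file=$1
  bs=${2:-8192}                     # match the dataset recordsize
  size=$(wc -c < "$file")
  nblocks=$(( (size + bs - 1) / bs ))
  seq 0 $(( nblocks - 1 )) | shuf | while read -r blk; do
    dd if="$file" of=/dev/null bs="$bs" skip="$blk" count=1 2>/dev/null
  done
  echo "$nblocks"                   # number of blocks touched
}
```

Run per file, e.g. `random_read_all /tank/data/file.dbf`; with the persistent L2ARC in 11.3, blocks warmed this way should survive a reboot.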