I wrote a simple shell script that uses the 'find' command so non-technical users can search the file server through a web browser on the LAN. It works, but the first search sometimes takes a long time, and then the follow-up searches are quick.

Is there a way to force a cache for 'find', or perhaps run something from cron daily or a few times a day, so users don't have to wait so long for their first results?
The `locate` command is designed to do exactly what you want. The software package includes a (typically daily) cron job that creates a database of existing files, and then the `locate` command will use that database to search for files instead of crawling through the filesystem tree.

Your Linux distribution may even offer you several implementations of `locate`: in addition to the implementation included in GNU `findutils`, there may be newer implementations `slocate` and/or `mlocate`.

Both `slocate` and `mlocate` will filter out any search results the user running the search would not be able to access, according to file and directory permissions. `mlocate` will also speed up database updates by detecting which directories have not changed after the previous database update.
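As a rough sketch of what this looks like in practice (the command names below are the common `mlocate`/GNU ones, and the search term is just a placeholder):

```sh
# Rebuild the file-name database by hand (the package's cron job normally
# refreshes it about once a day); run as root so all directories are indexed.
sudo updatedb

# Query the prebuilt database instead of walking the filesystem tree;
# -i makes the match case-insensitive.
locate -i 'quarterly report'
```

Your web script can then call `locate` instead of `find`, so even the first search is answered from the database rather than by crawling the disk.
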
The answer provided by @telcoM is probably the correct one you should follow. However, around ten years ago I once had no choice other than to keep the whole directory tree cached.

My solution back then was to add something like the entry sketched below to my crontab, so the directory tree was scanned very often and remained cached.
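A minimal sketch of such an entry, assuming the shared tree lives under `/srv/share` (both the path and the 10-minute interval are placeholders):

```
# Hypothetical crontab entry: walk the whole tree frequently and discard the
# output, so the directory entries stay in the kernel's dentry/inode cache
# and the first "real" find run by the web script starts out warm.
*/10 * * * * find /srv/share >/dev/null 2>&1
```

The trade-off is extra disk activity on every run; on a large or busy file server, `locate` with its prebuilt database is the cleaner option.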