The unix find(1) utility is very useful, allowing me to perform an action on many files that match certain specifications, e.g.
find /dump -type f -name '*.xml' -exec java -jar ProcessFile.jar {} \;
The above might run a script or tool over every XML file in a particular directory.
Let's say my script/program takes a lot of CPU time and I have 8 processors. It would be nice to process up to 8 files at a time.
GNU make allows for parallel job processing with the -j flag, but find does not appear to have such functionality. Is there an alternative, generic job-scheduling method of approaching this?
xargs with the -P option (number of processes). Say I wanted to compress all the logfiles in a directory on a 4-CPU machine:
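Something along these lines would do it (the current directory and the *.log pattern are illustrative, not from the original answer):

# run up to 4 bzip2 processes, one file each; -print0/-0 copes with awkward filenames
find . -name '*.log' -print0 | xargs -0 -n 1 -P 4 bzip2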
You can also say -n <number> for the maximum number of work-units per process. So say I had 2500 files and I said:
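For instance (again with placeholder path and pattern):

# 2500 files split into batches of 500, at most 4 bzip2 processes running at once
find . -name '*.log' -print0 | xargs -0 -n 500 -P 4 bzip2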
This would start 4 bzip2 processes, each with 500 files, and then when the first one finished another would be started for the last 500 files.

Not sure why the previous answer uses xargs and make; you have two parallel engines there! GNU parallel can help too.
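For example, reusing the command from the question (a sketch; quoting of more complex commands is worth checking against parallel's man page):

# run up to 8 java processes at once, one XML file per job
find /dump -type f -name '*.xml' | parallel -j8 java -jar ProcessFile.jar {}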
Note that without the -j8 argument, parallel defaults to the number of cores on your machine :-)

No need to "fix" find - make use of make itself to handle the parallelism. Have your process create a log file or some other output file, and then use a Makefile like this:
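A minimal sketch, assuming the Java tool writes its result to stdout and that one .out file per .xml (with no spaces in the names) serves as the completion marker; the paths and extensions are placeholders, not from the original answer:

# list every XML file under /dump and derive one .out target per file
SRCS := $(shell find /dump -type f -name '*.xml')
OUTS := $(SRCS:.xml=.out)

all: $(OUTS)

# delete a half-written .out if its recipe fails, so a rerun picks it up again
.DELETE_ON_ERROR:

# the recipe line must be indented with a tab
%.out: %.xml
	java -jar ProcessFile.jar $< > $@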
and invoked thus:
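# -j8 runs up to 8 recipes in parallel (assuming the sketch above and the question's 8 processors)
make -j8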
Better yet, if you ensure that the output file only gets created on successful completion of the Java process, you can take advantage of make's dependency handling to ensure that next time around only unprocessed files get done.

find can hand many files to a single command invocation using the "+" terminator to -exec; no xargs required. (This batches arguments rather than running jobs in parallel, but it avoids launching one process per file.) Combining it with grep, it can rip through your tree quickly looking for matches. For example, if I'm looking for all files in my sources directory containing the string 'foo', I can invoke
find sources -type f -exec grep -H foo {} +
All the suggestions make the execution run in parallel, but if your file tree is large enough the bottleneck may be find itself. A colleague of mine wrote locar as a parallel search tool, which is very useful when your filesystem can do scans in parallel. It might not help if your filesystem is on a single HDD, but on a RAID device, an SSD, or better yet a distributed filesystem, it will help tremendously.

locar does the file scan in parallel across multiple directories, so you get the list of files faster and can then combine it with xargs or parallel to run things in parallel as well.
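A rough sketch, assuming locar prints matching paths one per line the way find does (its actual invocation and options may differ):

# scan /dump in parallel with locar, then fan the processing out with GNU parallel
locar /dump | grep '\.xml$' | parallel -j8 java -jar ProcessFile.jar {}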