I am currently experiencing very bad performance using the following on an NFS network folder:
time find . | while read f; do test -L "$f" && f=$(readlink -m "$f"); grp="$(stat -c %G "$f")"; perm="$(stat -c %A "$f")"; done
Question 1) Within the loop, group and permissions are checked using the variables grp and perm. Is there a way to lower the amount of disk I/O for this kind of check over the network (e.g. read all metadata at once using find)?
Question 2) It seems the NFS mount isn't tuned very well: the same operation over a similar network link via SSHFS takes only one third of the time. All parameters are auto-negotiated. Any suggestions?
Your line is performing three calls for each file; a single stat plus parsing of its output would be enough. For starters, redesign your script to call stat only once, with stat -c "%n %G %A" ... if you need help with that, throw us a comment.
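A minimal sketch of that redesign, assuming GNU stat and group names without spaces (so the last two whitespace-separated fields can be split off reliably even if file names contain spaces):

```bash
find . | while IFS= read -r f; do
    # Resolve symlinks first so stat reports the target's metadata
    test -L "$f" && f=$(readlink -m "$f")
    # One stat call per file instead of two: name, group, permissions
    meta=$(stat -c '%n %G %A' "$f")
    perm=${meta##* }    # last field: permission string, e.g. -rw-r--r--
    rest=${meta% *}     # drop the permission field
    grp=${rest##* }     # last remaining field: the group name
done
```

If spawning one stat per file is still too slow, find . -exec stat -c '%n %G %A' {} + batches many file names into a single stat invocation and leaves only the parsing to the shell.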
The fastest solution I found during the last hour uses only find (no stat), while still following symbolic links, and checks for a particular group owner and permission as an example; it is faster by at least a factor of 100.
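The exact command is not quoted above; as an illustrative sketch of such a find-only check, with the group name users and group-write permission as placeholder criteria:

```bash
# -L makes find follow symbolic links; -group and -perm let find test
# the metadata itself, so no stat or readlink process is spawned per file.
# "users" and -g+w are placeholders for the group/permission of interest.
time find -L . -group users -perm -g+w
```

For Question 1, find -L . -printf '%p %g %M\n' likewise prints name, group, and symbolic permission string for every entry in a single pass, without calling stat at all.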