I wrote a simple bash script to back up certain files daily to a backup mount, keeping the last 3 days of backups. It's obviously too simple, as I'm occasionally getting odd behaviour that could be explained by the first mv being executed before the rm is complete.
Here's the script:
#!/bin/bash
mount /mnt/backups
while [ ! -d /mnt/backups/dailyBackup-0 ]
do
echo "Backup mount not present, sleeping..."
sleep 30
done
rm -r /mnt/backups/dailyBackup-2
mv /mnt/backups/dailyBackup-1 /mnt/backups/dailyBackup-2
mv /mnt/backups/dailyBackup-0 /mnt/backups/dailyBackup-1
dirname="/mnt/backups/dailyBackup-0"
mkdir "$dirname"
cd /
rsync -qr --stats root etc var "$dirname"
umount /mnt/backups
Although this works fine most of the time, I sometimes end up with the following, which looks like dailyBackup-1 is being moved before dailyBackup-2 has finished being deleted. If that is what's happening, what is the best way to prevent it?
/mnt/backups/dailyBackup-0:
total 0
drwxrwxrwx 1 root root 0 2010-12-07 03:27 var
drwxrwxrwx 1 root root 0 2010-12-07 02:39 root
drwxrwxrwx 1 root root 0 2010-12-07 02:38 etc
/mnt/backups/dailyBackup-1:
total 0
drwxrwxrwx 1 root root 0 2010-12-06 03:26 var
drwxrwxrwx 1 root root 0 2010-12-06 02:32 root
drwxrwxrwx 1 root root 0 2010-12-06 02:32 etc
/mnt/backups/dailyBackup-2:
total 0
drwxrwxrwx 1 root root 0 2010-12-07 02:36 var
drwxrwxrwx 1 root root 0 2010-12-05 03:21 dailyBackup-1
The problem is most likely that rm fails: note that var is still present in dailyBackup-2, most likely because some file inside it could not be deleted.
As a general note about writing system-management shell scripts:

a) Always check the (error) output of your scripts. Cron jobs mail it to you automatically, unless your email setup is broken.

b) Always handle any and all errors that might occur (for example, rm or mv failing). It's a good idea to put set -e at the top of your script, which makes the shell exit when it hits the first unhandled error. (For debugging, also add set -x, which prints every command as it is executed, so you can see what the script is doing.)
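As a minimal sketch of what set -e buys you: the inner script below stops at the first failing command, so the second echo never runs. (The false here is just a stand-in for a failing rm or mv.)

```shell
#!/bin/sh
# Run a tiny script in a subshell to show `set -e` aborting on the
# first failure: only "step 1" is printed, "step 2" is never reached.
out=$(sh -c '
set -e
echo "step 1"
false          # simulated failure (e.g. rm or mv failing)
echo "step 2"  # never reached because of set -e
')
echo "$out"    # prints only: step 1
```

Without set -e, the shell would log the error and carry on to "step 2", which is exactly how a backup script ends up mv-ing directories after a failed rm.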
And to answer your original question: rm will never exit before deleting all files, or more precisely, before the unlink() system call for the last file it found completes. (The only case I could imagine where files might still be present after being unlinked would be some obscure, buggy network filesystem.) But rm exiting does not mean all files were successfully deleted, even if you are root and use -fr (you are not even using -f): for example, if files are marked as immutable on ext* filesystems, or if files were created while rm was traversing the tree. rm will report that with an error message and an unsuccessful return status, though.
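You can see that return status directly. In this sketch, /nonexistent/path stands in for any file rm cannot delete (immutable, permission denied, and so on):

```shell
#!/bin/sh
# rm signals failure through its exit status even if you discard its
# error message; a non-zero status means the delete did not happen.
rm /nonexistent/path 2>/dev/null
status=$?
echo "rm exit status: $status"   # non-zero here
```

That status is what set -e (or the && chaining suggested below in another answer) reacts to.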
Try changing this

rm -r /mnt/backups/dailyBackup-2
mv /mnt/backups/dailyBackup-1 /mnt/backups/dailyBackup-2
mv /mnt/backups/dailyBackup-0 /mnt/backups/dailyBackup-1

to

rm -r /mnt/backups/dailyBackup-2 &&
mv /mnt/backups/dailyBackup-1 /mnt/backups/dailyBackup-2 &&
mv /mnt/backups/dailyBackup-0 /mnt/backups/dailyBackup-1

so each command will run only if the previous one has completed successfully (or, in other words, exited with status 0).
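A small demonstration of that short-circuit behaviour, with false and true standing in for a failing rm and a succeeding one:

```shell
#!/bin/sh
# `a && b` runs b only if a exited with status 0, so a failed rm
# stops the whole chain instead of letting the mv commands proceed.
false && echo "not printed"    # left side fails, right side is skipped
echo "chain status: $?"        # non-zero: the status of the failed chain
true && echo "printed"         # left side succeeds, right side runs
```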
Are the files on another server, accessed via an NFS mount? If NFS is set up with soft mounts, it does not guarantee that operations will complete.
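If that is the case, a hard mount (the default on most systems) makes operations block and retry until the server responds instead of failing partway through. A hypothetical /etc/fstab line, where the server name and export path are placeholders:

```
# /etc/fstab — hard mount: NFS operations retry rather than silently fail
server:/export/backups  /mnt/backups  nfs  hard,intr  0  0
```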