I have a cron job for rsync which runs every 2 minutes but sometimes takes longer than 2 minutes. To handle that I wrote a simple locking system: if the lockfile is not found, the script writes the lockfile, does its work, and then deletes the lockfile.
Now if the script crashes or is killed for some reason, that lockfile will linger on and cause problems.
What would be a good way of making sure the cron task can run again at some point? I have thought of checking the age of the lockfile and deleting it if it is older than a certain period, but I'd think there ought to be better, more elegant solutions for this.
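For reference, this is roughly what that age-check idea would look like (the lockfile path and the 10-minute cutoff are placeholders, and this still has the check-then-create race pointed out in the answers below):

```sh
#!/bin/sh
LOCKFILE=/tmp/rsync-job.lock

# Delete the lockfile if it is older than 10 minutes, on the
# assumption that a run holding it that long must have died.
find "$LOCKFILE" -mmin +10 -delete 2>/dev/null

if [ -e "$LOCKFILE" ]; then
    exit 1              # a recent lock exists: assume a run is in progress
fi

touch "$LOCKFILE"
rsync -a /source/ /destination/   # the actual job
rm -f "$LOCKFILE"
```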
Edit:
I have now implemented flock. I was a bit confused as to why the lockfile always seems to exist, but I found this page that explains how it works by storing the PID in the file information:
http://mattiasgeniar.be/2012/07/24/prevent-cronjobs-from-overlapping-in-linux/
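For anyone finding this later, my crontab entry now looks roughly like this (the lockfile path and rsync arguments are specific to my setup):

```
*/2 * * * * /usr/bin/flock -n /tmp/rsync.lockfile rsync -a /source/ /destination/
```

With -n, flock exits immediately instead of waiting if the previous run still holds the lock.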
Store the PID of the critical process as part of the lock, and when the script runs again, check whether that process is still active.
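A minimal sketch of that idea in shell (the lockfile path is a placeholder; kill -0 sends no signal, it only tests whether the PID is alive):

```sh
#!/bin/sh
LOCKFILE=/var/run/rsync-job.pid

# If the lockfile exists and the recorded PID is still alive, bail out.
if [ -f "$LOCKFILE" ] && kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
    exit 1
fi

echo $$ > "$LOCKFILE"              # record our own PID as the lock
trap 'rm -f "$LOCKFILE"' EXIT      # remove the lock when the script exits

rsync -a /source/ /destination/
```

Even if the script dies without cleaning up, the next run sees a dead PID and simply takes over the lock.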
A better way is to use a lock directory rather than a lock file, since mkdir is an atomic operation. You don't have to check whether the lock exists and then create it if not, which leaves a window of opportunity for something else to grab the lock between those two steps. Put the PID of the critical process in a file inside the lock directory, as in the sketch below.
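Something like this (the lock directory path is a placeholder):

```sh
#!/bin/sh
LOCKDIR=/var/run/rsync-job.lock

# mkdir either creates the directory and succeeds, or fails because it
# already exists; there is no separate test-then-create step to race on.
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    exit 1
fi

echo $$ > "$LOCKDIR/pid"           # record the PID inside the lock directory
trap 'rm -rf "$LOCKDIR"' EXIT      # remove the whole lock directory on exit

rsync -a /source/ /destination/
```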
On Linux you can use the flock utility, which handles all of this for you.
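For example, at the top of the script (file descriptor 9 and the lock path are arbitrary choices):

```sh
#!/bin/sh
# Open fd 9 on the lock file and try to take an exclusive lock
# without blocking; exit if another run already holds it.
exec 9> /var/lock/rsync-job.lock
flock -n 9 || exit 1

rsync -a /source/ /destination/
# The kernel drops the lock when the process exits, however it dies,
# so there is no stale-lockfile problem to handle.
```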
Check whether the process and the lockfile both exist. If only one of them exists, something is wrong and must be handled appropriately. For example, if the lockfile exists but the process is not running, delete the lockfile and move on.
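A sketch of that stale-lock handling (paths are hypothetical; the lockfile stores the PID of the last run):

```sh
#!/bin/sh
LOCKFILE=/var/run/rsync-job.pid

if [ -f "$LOCKFILE" ]; then
    if kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
        exit 0                 # lockfile and live process: a run is in progress
    fi
    rm -f "$LOCKFILE"          # lockfile but no process: stale, clean it up
fi

echo $$ > "$LOCKFILE"
rsync -a /source/ /destination/
rm -f "$LOCKFILE"
```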
I know you have your own locking system, but I would do it with fcron and exesev(false).
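As I understand the fcrontab syntax, the entry would look something like this (the script path is a placeholder):

```
!exesev(false)
*/2 * * * * /usr/local/bin/rsync-job.sh
```

exesev(false) tells fcron not to start a new instance of the job while the previous one is still running, so no lockfile is needed at all.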