This somewhat overlaps with another post I made, but it's different enough that I've posted it as a new question.
I have a script that can take just over a minute to run, and my cron job is set to run every minute. I can stop a second cron job from executing the script while the first one is still running by using flock (via PHP) in the file. However, this means I lose one iteration of the routine and have to wait nearly a full minute before it is triggered again (as far as my understanding goes).
What I would like is for the script, when it finds the lock held, not to bomb out but to wait. Over time, though, the queue could grow quite long, so I would also like to cap the number of queued cron jobs at 10.
I am a real newbie with Linux (I've had a Linux VPS for three days now), so I am not sure if my solution is even practical.
Thanks.
I would say that each job could create a lock file and wait on the newest existing lock file. If the count of lock files is 10, then exit instead of creating and waiting.
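One way this idea might be sketched in shell (the paths, the slot-file scheme, and the limit of 10 are all assumptions for illustration): each invocation claims a slot file, gives up if 10 slots are already taken, and otherwise blocks on a shared flock until earlier jobs finish.

```shell
#!/bin/sh
# Hypothetical sketch: each cron invocation claims a slot file; at most
# 10 jobs may queue, and they serialise on a shared flock(1) lock.
SLOTS=/tmp/myjob.slots        # made-up paths for illustration
LOCK=/tmp/myjob.lock
mkdir -p "$SLOTS"

# NB: this count check is not atomic, so under heavy contention it can
# briefly over-admit -- it is only a sketch of the idea.
if [ "$(ls "$SLOTS" | wc -l)" -ge 10 ]; then
    exit 0                    # queue already full, skip this run
fi

slot="$SLOTS/$$"
touch "$slot"
trap 'rm -f "$slot"' EXIT     # drop the slot even if we are killed

(
    flock 9                   # block here until earlier jobs finish
    # ... run the real job here ...
    echo "job ran"
) 9>"$LOCK"
```

Note that the kernel does not guarantee strict FIFO wakeup order for flock waiters, so "wait on the newest lock file" is only approximated here.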
Why does it matter if a single run misses? At this point, I think you need to go beyond a cron job and a script.
You're getting into a programming question here. You could have each run of the script, if it decides to wait, read a counter from a text file, increment it while waiting, and then decrement it when it finally gets to run. If the counter reads over 10, you can just exit. But now you've got to make sure nothing tries to read or write the counter at the same time.
At this point, why don't you write this as a daemon so that it can actually track its own state? You may even want to write it as a server and client.
As others have pointed out, cron is not the right tool here.
Things to watch out for with lockfiles:

* You have to make sure they're cleaned up on exit or server reboot. If the process is killed and the lockfile is left behind, you may end up in a situation where the job never runs because there are 10 stale lockfiles.
* Either use a program that's specifically designed for lock files, or use directories: mkdir will error if the directory already exists, whereas touch always returns success, which opens up a race condition.
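Both points above can be sketched in a few lines of shell (the lock path is made up): mkdir acts as an atomic test-and-set, and a trap cleans the lock up even if the script dies part-way through.

```shell
#!/bin/sh
# Sketch of the mkdir locking approach; the lock path is hypothetical.
# mkdir is an atomic test-and-set: it fails if the directory already
# exists, whereas touch "succeeds" either way and so can race.
LOCKDIR=/tmp/myjob.lock.d

if mkdir "$LOCKDIR" 2>/dev/null; then
    # Remove the lock even if the script errors out or is signalled.
    trap 'rmdir "$LOCKDIR" 2>/dev/null' EXIT INT TERM
    echo "lock acquired, running job"
    # ... do the real work here ...
else
    echo "another instance holds the lock, exiting"
    exit 0
fi
```

A reboot still leaves `/tmp` cleanup to the distribution's tmp-cleaning policy, which is one reason `/tmp` is a common choice for lock paths.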
What is the difference between a queue of 1 and a queue of 10? Once you start queuing, it's not as if your cron job will suddenly speed up and the queued items will get done any faster than before.
I'd say flock-and-check is just fine; but if you really want to do it the way you asked, you can implement your own locking mechanism and queue up processes as long as the count they read from some file is smaller than 10 or so.

For example: the cron job wakes up and checks whether /tmp/count exists; if not, it does `echo 1 > /tmp/count`, otherwise it increments the number in /tmp/count. When the cron job's script is done, it decrements the value in /tmp/count (if the file exists), and if the decrement takes it from 1 to 0 it deletes the file. When the count in the file reaches 10, the script simply exits.
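The steps above might look like this in shell. Note that a plain cat/echo pair would race between concurrent jobs (the problem another answer warns about), so this sketch holds each read-modify-write under flock(1); the lock-file path is an assumption.

```shell
#!/bin/sh
# Sketch of the /tmp/count scheme described above.  Each
# read-modify-write of the counter is held under flock(1) because a
# bare cat/echo would race; /tmp/count.lock is a made-up path.
COUNT=/tmp/count
LOCK=/tmp/count.lock

exec 8>"$LOCK"

flock 8
if [ -f "$COUNT" ]; then n=$(cat "$COUNT"); else n=0; fi
if [ "$n" -ge 10 ]; then
    flock -u 8
    exit 0                    # ten jobs already queued: simply exit
fi
echo $((n + 1)) > "$COUNT"    # create the file at 1, or increment it
flock -u 8

# ... wait for our turn and run the real job here ...

flock 8
n=$(cat "$COUNT")
if [ "$n" -le 1 ]; then
    rm -f "$COUNT"            # decremented from 1 to 0: delete file
else
    echo $((n - 1)) > "$COUNT"
fi
flock -u 8
```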
I would skip cron here. Just monitor all the files and use them when they are not locked, as needed. Is this possible?
You could use a program called The Fat Controller. It works like cron, regularly running a script; crucially, though, instead of running every x minutes or seconds, it waits x minutes or seconds from the time the script ends before starting it again.
It also has other features. Have a look at the website, www.4pmp.com/fatcontroller/, where there's more information plus use cases.
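The key behaviour described above (the interval is measured from the end of one run to the start of the next, so runs can never overlap) can also be sketched with a plain loop; the `job` function and the three-iteration limit here are just for demonstration, a real wrapper would loop forever.

```shell
#!/bin/sh
# Sketch: run the job, then wait a fixed interval measured from the
# moment the job *ends*, so two runs can never overlap.  job() and
# the three-iteration limit are demonstration stand-ins.
job() {
    echo "working"
}

i=0
while [ "$i" -lt 3 ]; do      # a real wrapper would loop forever
    job
    sleep 1                   # interval counted from job completion
    i=$((i + 1))
done
```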