Title says it all.
Does anyone know of a way to replicate beanstalkd so that if a beanstalkd server goes down, other slaves can take over?
Here's one approach I thought of: I could make beanstalkd write its binlog (with the -b option) to a shared location, and then somehow have a secondary/backup server start beanstalkd if the primary fails.
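For reference, that setup would look roughly like this (the paths and addresses are illustrative; the shared location could be an NFS or DRBD-backed mount):

```shell
# primary: persist every job to a binlog in a shared/replicated directory
beanstalkd -l 0.0.0.0 -p 11300 -b /mnt/shared/beanstalk

# on failover, the standby starts beanstalkd against the same binlog
# directory; it replays the log and recovers the jobs recorded there
beanstalkd -l 0.0.0.0 -p 11300 -b /mnt/shared/beanstalk
```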
There must be a better way though.
Since it is writing to disk via the binlog, I'd think you could do something similar to what MySQL admins typically do: heartbeat with DRBD (example here).
The last time I tried to use heartbeat, though, it didn't support non-multicast checking between nodes, which made it more or less impossible to run on cloud/VPS infrastructure (AWS, Linode, Slicehost, etc.); in fact, most clustering services rely on multicast. This may no longer be the case, but it's something to be aware of. You may be able to use keepalived to provide IP-based failover instead. It also only supports multicast out of the box, but a patch from Willy Tarreau (author of HAProxy) adds unicast support. I have personally tested this on a pair of Linode VPS servers: keepalived was able to fail over a shared IP address when the master server failed.
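Assuming a unicast-capable keepalived build, the failover pair can be sketched roughly like this (all interface names and addresses are illustrative):

```
# /etc/keepalived/keepalived.conf on the primary node
vrrp_instance beanstalk_vip {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    unicast_src_ip 10.0.0.1      # this node
    unicast_peer {
        10.0.0.2                 # the standby node
    }
    virtual_ipaddress {
        10.0.0.10                # shared IP your clients connect to
    }
}
```

The standby would carry the mirror config (state BACKUP, a lower priority, and the peer addresses swapped), so that the 10.0.0.10 address moves to it when the master stops answering VRRP.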
One thing you can do, which is probably less optimal, is to write jobs to a number of beanstalkd servers (i.e., partitioning). If one of them goes down, have your app detect this and write to the other instance(s) instead. Your workers will have to poll each of the beanstalkd instances intelligently and be able to ignore dead ones. Since you are binlogging, bringing an instance back up should be as easy as restarting it; the app and workers will detect this and continue as usual (and begin processing the jobs in the newly started instance). I'm obviously simplifying the process, but that's another way to handle it.
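A minimal sketch of that client-side partitioning, in Python. `PartitionedQueue` and the injected `connect` factory are hypothetical names (not a real beanstalkd client API); the point is just the skip-dead-instances logic:

```python
import random


class PartitionedQueue:
    """Spread jobs across several beanstalkd instances, skipping dead ones.

    servers: list of (host, port) tuples.
    connect: factory returning a client object with put()/reserve(),
             raising ConnectionError when the instance is unreachable.
    """

    def __init__(self, servers, connect):
        self.servers = servers
        self.connect = connect

    def put(self, job):
        # Try the servers in random order so load is spread; fall
        # through to the next instance whenever one is down.
        for host, port in random.sample(self.servers, len(self.servers)):
            try:
                self.connect(host, port).put(job)
                return (host, port)  # report where the job landed
            except ConnectionError:
                continue
        raise RuntimeError("no beanstalkd instance reachable")

    def reserve_any(self, timeout=0):
        # Poll every instance; dead ones are simply skipped, so a
        # restarted instance is picked up again automatically.
        for host, port in self.servers:
            try:
                job = self.connect(host, port).reserve(timeout=timeout)
                if job is not None:
                    return job
            except ConnectionError:
                continue
        return None
```

In a real app the `connect` factory would wrap an actual beanstalkd client library and you'd likely cache connections rather than reconnecting per call, but the failover shape is the same.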