I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk.
At first I found the -ulim option useful for limiting the log file size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition.
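For reference, the pair looks roughly like this; the hostnames, ports, paths, and -ulim value below are placeholders rather than my actual config:

    # Two ttservers, each replicating from the other (master-master).
    # -ulog sets the update log directory, -ulim caps each ulog file (bytes),
    # -sid gives each node a unique server id, -mhost/-mport point at the peer,
    # and -rts records the replication time stamp.
    ttserver -port 1978 -sid 1 -ulog /var/ttserver/a/ulog -ulim 134217728 \
             -mhost node-b -mport 1978 -rts /var/ttserver/a.rts /var/ttserver/a/db.tch
    ttserver -port 1978 -sid 2 -ulog /var/ttserver/b/ulog -ulim 134217728 \
             -mhost node-a -mport 1978 -rts /var/ttserver/b.rts /var/ttserver/b/db.tch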
I suppose I'll write a shell script that deletes ulogs older than X (something like the sketch below), once I find out how far back in the update log Tokyo Tyrant needs to look in order to fail over.
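Something along these lines is what I have in mind; the directory and retention window are placeholders, and the *.ulog pattern assumes ttserver's default numbered log file names:

    # Delete update log files older than the retention window.
    ULOG_DIR=/var/ttserver/a/ulog
    RETENTION_DAYS=7
    find "$ULOG_DIR" -name '*.ulog' -mtime +"$RETENTION_DAYS" -print -delete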
Does anyone have any experience with this in Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size versus how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status?
Thanks, nathan
Just to follow up, below is from the reply of Mikio Hirabayashi (the Tokyo Tyrant developer) to a similarly worded e-mail:
Running that command will show you how far a slave is behind its master. Once you know that, you can spend some time finding the right ulog rollover size and how many ulog files back you can trash and still feel safe. It's probably best to do this under a load that simulates a heavy day on your Tokyo Tyrant key/value databases.
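As a rough sketch of checking that delay from the slave side, you can compare the slave's replication time stamp (the file given to ttserver's -rts option) with the clock. This assumes the RTS file holds a plain-text, microsecond-resolution UNIX timestamp, so verify that against your ttserver version before trusting it:

    # Estimate how far this node's replication has fallen behind its peer.
    RTS_FILE=/var/ttserver/a.rts        # path passed to ttserver's -rts option
    rts_us=$(cat "$RTS_FILE")           # assumed: microseconds since the epoch
    now_us=$(( $(date +%s) * 1000000 ))
    echo "replication delay: $(( (now_us - rts_us) / 1000000 )) seconds"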
I shamelessly ripped off a script from stackoverflow:
@kubanskamac's answer was correct in the abstract, but Mikio gives the command to start the optimization.
FYI, I've written a ulog management script that takes the replication delay into account:
http://conigliaro.org/2010/04/28/tokyo-tyrant-update-log-ulog-management/
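The idea, in rough outline (this is not the script from that post; the directory, measured delay, and safety margin are placeholders): only delete ulog files old enough that the slowest slave must already have replayed them.

    # Delay-aware ulog cleanup: keep anything the slowest slave might still need.
    ULOG_DIR=/var/ttserver/a/ulog
    DELAY_SECONDS=120       # worst-case replication delay you have measured
    MARGIN_SECONDS=3600     # extra safety margin on top of the delay
    cutoff_minutes=$(( (DELAY_SECONDS + MARGIN_SECONDS) / 60 ))
    find "$ULOG_DIR" -name '*.ulog' -mmin +"$cutoff_minutes" -print -delete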
Disclaimer: this is the first time I've even heard about Tokyo Tyrant. I just see some familiar patterns looking at the docs.
In the transactional systems (e.g. databases) I know, attention is paid to two types of unexpected events:
1. a crash of the engine itself (instance failure), after which the most recent log entries are replayed to bring the data files back to a consistent state;
2. loss or corruption of the storage media, after which a backup is restored and every log written since that backup is replayed.
Each log usually passes through three stages of existence:
1. still needed for instance recovery - the engine would replay it after a crash;
2. still needed for media recovery - the engine itself is done with it, but it would be required to roll forward from the last backup;
3. obsolete - covered by a newer backup (and no longer needed by any replica), so it is safe to delete.
I have no idea how to figure out which of your ulogs Tokyo Tyrant has already committed, but maybe this general outline will help.