I use Berkeley DB's db_archive to move unused transaction logs to a backup location for disaster recovery. The goal is to keep the data-loss window to at most N minutes, so I run db_archive every N minutes. When transaction throughput is high enough, this works as expected: the maximum transaction log size is reached, a new log file is created, a checkpoint renders the old one unused, and db_archive picks it up. When throughput is minimal, however, nothing gets archived until the maximum log size is eventually hit and a checkpoint frees the old log. Until then, those changes cannot be restored from the backup because the log containing them is never archived.
Is there a way to force a transaction log rotation, so that even changes that do not cause a new transaction log to be created get archived at regular intervals? Reducing the maximum transaction log size would improve the situation but not resolve it. The only workaround I can come up with is forcing a rotation by writing a custom, appropriately sized transaction log record via the Berkeley DB API DB_ENV->log_put() before triggering a checkpoint and calling db_archive, but that still doesn't sound like a production-ready solution.
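To make that concrete, here is roughly what I have in mind (untested). The DB_LOG_STAT arithmetic and the assumption that one padding record written with log_put() forces the switch to a new log file are my own guesses, and I don't know whether recovery will tolerate an opaque padding record like this:

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <db.h>

    /*
     * Sketch of the workaround: pad the remainder of the current log file
     * with one dummy record so the next write lands in a new file, then
     * checkpoint so the old file becomes unused and db_archive picks it up.
     */
    int force_log_rotation(DB_ENV *dbenv)
    {
        DB_LOG_STAT *sp;
        DB_LSN lsn;
        DBT rec;
        u_int32_t pad;
        int ret;

        /* How much room is left in the current log file? */
        if ((ret = dbenv->log_stat(dbenv, &sp, 0)) != 0)
            return ret;
        pad = sp->st_lg_size > sp->st_cur_offset ?
            sp->st_lg_size - sp->st_cur_offset : 0;
        free(sp);

        if (pad > 0) {
            /* One dummy record intended to fill the remaining space. */
            memset(&rec, 0, sizeof(rec));
            rec.size = pad;
            if ((rec.data = calloc(1, pad)) == NULL)
                return ENOMEM;
            ret = dbenv->log_put(dbenv, &lsn, &rec, DB_FLUSH);
            free(rec.data);
            if (ret != 0)
                return ret;
        }

        /* Force a checkpoint so the previous file is no longer needed. */
        return dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE);
    }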
Thanks in advance for any additional info on this subject!
It can be done but, judging by the existing call sites in the 4.7.25/6.2.23 source, the scenario definitely requires code to do so: none of the bundled utilities call the helper. The internal function __log_newfile() can be used to switch to a new log file. After a checkpoint, the old log file becomes unused and eligible for archiving.
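The checkpoint-and-archive half of the cycle needs only the public API. Below is a minimal sketch assuming an already opened transactional environment; archive_unused_logs is just a name I picked, and the call into __log_newfile() is left as a placeholder comment because hooking into the internal API (headers, locking) is version-specific:

    #include <stdio.h>
    #include <stdlib.h>
    #include <db.h>

    /*
     * After the switch to a new log file (however it is triggered):
     * force a checkpoint, then fetch the list of log files no longer
     * needed for recovery -- the same list "db_archive -a" prints --
     * and copy them to the backup location.
     */
    int archive_unused_logs(DB_ENV *dbenv)
    {
        char **list, **p;
        int ret;

        /* ...switch to a new log file here, e.g. via __log_newfile()... */

        /* Checkpoint: the previous log file becomes unused. */
        if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE)) != 0)
            return ret;

        /* Absolute paths of log files that are no longer needed. */
        if ((ret = dbenv->log_archive(dbenv, &list, DB_ARCH_ABS)) != 0)
            return ret;

        if (list != NULL) {
            for (p = list; *p != NULL; ++p)
                printf("archive candidate: %s\n", *p); /* copy to backup here */
            free(list);
        }
        return 0;
    }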