I'm looking to backup my linux server to Amazon S3 using duplicity.
I found a great resource here that helped me get it set up, and I'm using the basic script that was listed there, copied below:
#!/bin/sh
# Export some ENV variables so you don't have to type anything
export AWS_ACCESS_KEY_ID=[your-access-key-id]
export AWS_SECRET_ACCESS_KEY=[your-secret-access-key]
export PASSPHRASE=[your-gpg-passphrase]
GPG_KEY=[your-gpg-key]
# The source of your backup
SOURCE=/
# The destination
# Note that the bucket need not exist
# but does need to be unique amongst all
# Amazon S3 users. So, choose wisely.
DEST=s3+http://[your-bucket-name]/[backup-folder]
duplicity \
--encrypt-key=${GPG_KEY} \
--sign-key=${GPG_KEY} \
--include=/boot \
--include=/etc \
--include=/home \
--include=/root \
--include=/var/lib/mysql \
--exclude=/** \
${SOURCE} ${DEST}
# Unset the ENV variables. Don't need them sitting around
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset PASSPHRASE
Does anyone else have experience with duplicity who can improve this script and/or share best practices to help create a better one?
I am using a variation of that script for my backups. I recently made some changes to it to try to save some money on my Amazon S3 bill (it's a personal server, otherwise I wouldn't have minded so much).
The full script is here, but I'll list the changes I made below.
The first option makes sure that duplicity does a full backup every month, regardless. This is useful because it means I can remove everything up to the latest full backup if I need to delete files from S3.
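The post doesn't quote the option itself, but duplicity's `--full-if-older-than` flag matches this description: it forces a full backup whenever the most recent full backup is older than the given interval. A minimal sketch, assuming that is the flag in question (the `1M` interval is an assumption, not quoted from the post):

```shell
# Hypothetical: force a full backup if the last full backup is over a month
# old. "1M" is duplicity's time-interval syntax for one month.
FULL_INTERVAL=1M
FULL_OPT="--full-if-older-than ${FULL_INTERVAL}"

# This option would be added to the duplicity invocation, e.g.:
#   duplicity ${FULL_OPT} ... ${SOURCE} ${DEST}
echo "${FULL_OPT}"
```

Without a flag like this, duplicity keeps producing incrementals forever, and old full backups can never be pruned without breaking the chain.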
The second option decreases the number of files duplicity stores on S3, which means fewer requests to S3 and therefore a lower bill.
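Again the flag isn't quoted, but `--volsize` is the duplicity option that controls this: it sets the size in megabytes of each backup volume, so larger volumes mean fewer objects on S3 and fewer PUT/GET requests per run. A sketch, with the 250 MB value as an assumption:

```shell
# Hypothetical: raise the volume size from duplicity's small default so each
# backup run uploads fewer, larger files. 250 (MB) is an assumed value.
VOLSIZE_MB=250
VOLSIZE_OPT="--volsize ${VOLSIZE_MB}"

# Added to the duplicity invocation alongside the other options, e.g.:
#   duplicity ${VOLSIZE_OPT} ... ${SOURCE} ${DEST}
echo "${VOLSIZE_OPT}"
```

The trade-off is that a failed upload wastes more bandwidth when it has to be retried, since each volume is bigger.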
I also added the following after the backup has run. This removes any backups older than 6 months from S3.
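The cleanup command itself isn't shown in the post, but duplicity's `remove-older-than` action does exactly this: it deletes backup sets older than the given age, and requires `--force` to actually delete rather than just list. A sketch of what such a step could look like (the bucket placeholder is reused from the script above):

```shell
# Hypothetical cleanup step, assumed to match the description in the post:
# delete all backup sets on S3 older than six months. --force is required,
# otherwise duplicity only prints what it would remove.
MAX_AGE=6M
DEST="s3+http://[your-bucket-name]/[backup-folder]"
CLEANUP_CMD="duplicity remove-older-than ${MAX_AGE} --force ${DEST}"
echo "${CLEANUP_CMD}"
```

Note that `remove-older-than` only removes whole chains: a full backup and its incrementals are kept until the entire chain is older than the cutoff, which is why forcing a monthly full backup (as above) matters.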