I have an Ubuntu server hosting a PHP5/MySQL webapp. I also have another server (offsite) with a directory just for backups of my webapp server. Currently, I have a simple cron job which performs the following:
- Copies /var/www to a local backup directory. Command:
cp -ua /var/www /backup/www
- Runs mysqldump on information_schema and my app's database and copies the dump files to the local backup directory.
- Mounts the backup server as an sshfs volume.
- Copies all files from the local backup directory to the sshfs volume.
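For reference, a simplified sketch of that cron script (paths, credentials, and the remote host are placeholders):

```sh
#!/bin/sh
# Nightly backup job, simplified; all names are placeholders.
cp -ua /var/www /backup/www

# Dump the schema and the app database to the local backup dir.
# (Depending on MySQL version, information_schema may also need
# --skip-lock-tables.)
mysqldump --opt information_schema > /backup/sql/information_schema.sql
mysqldump --opt myappdb > /backup/sql/myappdb.sql

# Push everything to the offsite box over sshfs.
sshfs backup@offsite.example.com:/backups/webapp /mnt/offsite
cp -ua /backup/. /mnt/offsite/
fusermount -u /mnt/offsite
```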
Questions:
- Is this an efficient solution or would a dedicated backup program work better/faster?
- Am I covering everything I need? For example: are there configuration files that I should backup too (PHP, Apache2, MySQL)?
- If there is a dedicated backup program that would serve my needs better, which would you recommend?
Thanks for any advice.
I'd recommend something that can do incremental backups, so that you can go back to files as they were several days ago. We use dirvish for this (it's available from the Ubuntu repos). It's basically a wrapper for rsync, written in Perl, that makes it easier to set up regular incremental backups and then expire them according to rules you define.
On our system we keep backups for 7 days, but keep the first backup of each week for 1 month and the first backup of each month for one year, so you can usually find files that were modified or deleted some time ago. For things that are small but important, we use much longer expiry times.
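For illustration, that expiry policy looks roughly like this in dirvish's master.conf (a sketch; the bank path and vault name are assumptions, and daily images are assumed so Monday counts as the first backup of the week):

```
bank:
	/backup/dirvish
Runall:
	webapp
expire-default: +7 days
expire-rule:
#   MIN HR  DOM MON DOW
    *   *   *   *   1     +1 month
    *   *   1   *   *     +1 year
```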
As for what you should back up: in my opinion, if you have the space, back up everything unless you're sure you don't need it. With incremental rsync backups you only copy the files that have changed since the last backup, and I'd rather have an extra 3 GB of static operating-system data on my backup drive than find one important file missing when I need to recover it.
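For what it's worth, the mechanism dirvish uses is essentially rsync with hard links against the previous snapshot; a minimal sketch (all paths are placeholders):

```sh
#!/bin/sh
# Each day's snapshot is a complete tree, but files unchanged since
# yesterday are hard links into the previous snapshot, so only
# changed files cost space and transfer time.
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -d yesterday +%Y%m%d)
rsync -a \
    --link-dest=/backup/snapshots/$YESTERDAY \
    /etc /var/www \
    /backup/snapshots/$TODAY/
```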
At a minimum I'd back up the entire contents of /etc/ and most of /var, depending on what is running on the server.
If your MySQL database is fairly small, your current approach is probably acceptable, but since you need to lock the database while generating the dump, it isn't practical for large databases. Because of this, many people run a slave MySQL server, which is just a read-only copy of the main one on the backup host, and generate backups from that.
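For a small database the dump itself is a one-liner; something like the following (database name and credentials are placeholders). The default behaviour locks the tables for the duration of the dump, which is exactly what hurts on large databases; if all your tables are InnoDB, --single-transaction gives a consistent dump without the locks:

```sh
# Default --opt behaviour locks each table while it is dumped:
mysqldump -u backup -p myappdb > /backup/sql/myappdb.sql

# InnoDB-only databases can use a consistent snapshot instead:
mysqldump --single-transaction -u backup -p myappdb > /backup/sql/myappdb.sql
```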
The MySQL documentation on setting up replication is fairly good; I worked through it for the first time in an evening, without much previous MySQL experience.
We have our slave MySQL server tied into dirvish, so each day it generates a dump of each individual database, as well as a complete copy of /var/lib/mysql (it stops the MySQL server before doing this).
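On the slave that amounts to a pre-backup script along these lines (a sketch, not our actual script; paths and the init script name are assumptions), plus a matching post-backup script that starts MySQL again:

```sh
#!/bin/sh
# Dump each database to its own file, then stop mysqld so the
# backup copies /var/lib/mysql in a consistent state.
mkdir -p /backup/sql
for db in $(mysql -N -B -e 'SHOW DATABASES'); do
    [ "$db" = "information_schema" ] && continue
    mysqldump --opt "$db" > "/backup/sql/$db.sql"
done
/etc/init.d/mysql stop
```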
See convenient & easy way backing up mySQL & SVN on Ubuntu machine, which pretty much sums up what you need.
In short: backupninja and rdiff-backup. Look at what's available, see what suits your needs, and pick that.

Back up /etc/ (server configuration), all your web stuff, RCS data, and the database. I usually copy the home directories as well, because I usually have a bunch of nice small scripts that really don't fit anywhere else, and if I have to restore a server it is really nice to have a 'known' setup of the shell, preferred editor, etc.

I have my servers back up everything I need to a tar archive, which then gets copied to an external backup. The script also deletes backups older than 7 days. I forgot where I found it, but I didn't write it.
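A minimal script in that spirit might look like this (a sketch; I don't have the original, and all paths are placeholders):

```sh
#!/bin/sh
# Archive the web root and configs, copy to the external backup,
# then delete local archives older than 7 days.
DATE=$(date +%Y%m%d)
tar czf "/backup/site-$DATE.tar.gz" /var/www /etc
cp "/backup/site-$DATE.tar.gz" /mnt/external/
find /backup -name 'site-*.tar.gz' -mtime +7 -delete
```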