I use Rackspace Cloud Sites and don't have access to SSH or rsync on their servers, but I do have access to SSHFS.
I would like to be able to back up my cloud site to my local Ubuntu server, which has SSH, rsync, etc.
So far, I'm thinking the best way to do this is to mount the site on the local server using:
sshfs [email protected]:/path-to-site/ ~/Sites_Mounted/site-name/ -o reconnect,cache=no,compression=yes,ServerAliveInterval=15
reconnect - so the connection will re-establish itself if it drops
cache=no - because we want live backups, not stale cached files
compression=yes - to minimise bandwidth usage
ServerAliveInterval=15 - because SSHFS otherwise tends to drop out and hang after long idle periods
I was then thinking of using an rsync command to copy the mounted site's files to a backup directory on the local server. When the next backup is due 12 hours later, I would copy/rsync that backup dir to a new backup dir (with a different name, e.g. 2012-01-01-sitename), then use another rsync command to copy only the changes on the remote server into the new backup dir, which already contains the old/previous backup.
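Concretely, I'm imagining something like this (a rough sketch only; the paths are placeholders, and the "latest" symlink plus the cp -al hard-link trick are just one way of doing the "copy the previous backup" step):

#!/bin/bash
# Rough sketch of the 12-hourly rotation described above.
# Paths and names are placeholders.
MOUNT=~/Sites_Mounted/site-name
BACKUPS=~/Backups/site-name
NEW="$BACKUPS/$(date +%Y-%m-%d-%H%M)-sitename"
LATEST="$BACKUPS/latest"

mkdir -p "$NEW"

# Carry the previous backup forward cheaply using hard links,
# then pull only the changes from the mounted remote site.
if [ -d "$LATEST" ]; then
    cp -al "$LATEST/." "$NEW/"
fi
rsync -a --delete "$MOUNT/" "$NEW/"

# Point "latest" at the backup we just made.
rm -f "$LATEST"
ln -s "$NEW" "$LATEST"

Run from cron every 12 hours (e.g. 0 */12 * * *), that would give one dated directory per run.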
My questions are: will this approach work? If so, what commands would I need to use, and would it be possible to include all of those in a single .sh script that I could run?
Or is there a simpler, more efficient or better way to do this?
(I think I could zip the entire site on the server and download that, but this seems a bit resource-heavy.)
I've had to clone several cloud-based servers; here's my approach:
Stop any running services that you can. If that's not an option, you'll need to do DB dumps and backups separately (e.g. for anything that uses MySQL, Redis, Solr, etc.)
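For the MySQL case, a consistent dump taken before the copy might look like this (credentials and output path are placeholders):

# Dump all databases in a single consistent transaction (InnoDB tables);
# adjust credentials and output path to suit.
mysqldump -u root -p --single-transaction --all-databases > /root/all-databases.sql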
Create a directory in the root, e.g. /x
Mount /dev/sda1 (or xvda1, or whatever your root system partition is) on /x (you can have one device mounted at two different points at the same time). The value here is that you won't get errors for the devices in /proc, etc. If you're using LVM, a snapshot works great for this too.
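That step, roughly (the device and volume-group names here are assumptions you'd replace with your own):

mkdir /x
mount /dev/sda1 /x    # second mount of the root device

# Or, with LVM, take a snapshot and mount that instead:
lvcreate --snapshot --size 5G --name rootsnap /dev/vg0/root
mount /dev/vg0/rootsnap /x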
At this point, you have a few options. If your server has enough disk space, simply make a directory /y and tar /x up into it.
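For example (the archive name and the --exclude are my additions; excluding /y stops the growing tarball from trying to include itself, since /x is the same filesystem):

mkdir /y
# Create a compressed archive of the mounted copy, preserving permissions.
tar -czpf /y/backup.tar.gz --exclude=./y -C /x .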
If you don't, then you can shoot it to another node via ssh:
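Something along these lines (remote host and path are placeholders):

# Stream the archive straight to another machine instead of writing it locally.
tar -czpf - -C /x . | ssh user@othernode 'cat > /backups/server.tar.gz'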
Whichever route you go, you can then download the tarball.
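For instance, from your local machine (hostname and paths assumed):

scp user@cloudserver:/y/backup.tar.gz ~/backups/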
Last and probably easiest, but not ideal in my mind, is to simply rsync the /x/ directory to your local machine.
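That would be roughly (flags and destination are my choice):

# Pull the mounted copy down, preserving permissions, ownership and symlinks.
rsync -az --numeric-ids --delete user@cloudserver:/x/ ~/backups/server-clone/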
Whatever route you go, if you have large databases or cruft you don't need, you save time by excluding them from the tar process (simply copying a running DB's files can leave the copy corrupted.)
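For instance, skipping the MySQL data directory you already dumped separately (the path is the usual Debian/Ubuntu default; adjust to your layout):

tar -czpf /y/backup.tar.gz --exclude=./y --exclude=./var/lib/mysql -C /x .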