I'm trying to set up a Linux development environment so I can safely make changes to my website without breaking the live site.
Linode hosts my live site. A simple solution would be to host my development server on Linode as well, but I want to avoid doubling my hosting costs.
The cheapest way I see is to use Vagrant on my Windows workstation to host my development environment.
After I attempt to restore the backup to Vagrant and reboot the VM, I can no longer ssh into the Vagrant host.
It's probably because restoring the backup overwrites some special Vagrant configuration, but I'm not sure how to avoid that.
How do I make this approach work? If my approach is fundamentally wrong, can you suggest an alternative?
Creating the backup
On the Linode I used these commands to create a compressed copy of the entire filesystem, while ignoring things that shouldn't be included in the backup:
$ sudo rsync -ahvz --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/backup/*} /* /backup/2
$ sudo tar -czf /backup/2.gz /backup/2
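(Note: the --exclude={...} part relies on bash brace expansion; the shell rewrites it into one --exclude= option per pattern before rsync runs, so it's equivalent to:)

$ sudo rsync -ahvz --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* \
    --exclude=/tmp/* --exclude=/run/* --exclude=/mnt/* --exclude=/backup/* \
    /* /backup/2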
The backup file is called 2.gz because this is the second backup. The first backup is called 1.gz.
I use WinSCP to copy the backup file to my Windows workstation.
Setting up the Vagrant host
I need a Vagrant box that matches my Linode operating system (Ubuntu 12.04.3 LTS, kernel 3.9.3). I selected the closest match from vagrantbox.es:
Ubuntu Server Precise 12.04.3 amd64
Kernel is ready for Docker (Docker not included)
On my workstation I ran these commands to add the box and initialize and boot an instance:
$ vagrant box add ubuntu-precise http://nitron-vagrant.s3-website-us-east-1.amazonaws.com/vagrant_ubuntu_12.04.3_amd64_virtualbox.box
$ mkdir linode-test
$ cd linode-test
$ vagrant init ubuntu-precise
$ vagrant up
Now Vagrant is running a machine with SSH on port 2222.
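To double-check the connection details, vagrant ssh-config prints the host name, port, and private key Vagrant uses, which is handy for setting up PuTTY:

$ vagrant ssh-config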
The operating system version is the same. The kernel version is 3.8.0. Sounds close enough.
Restoring the backup
With WinSCP I copied the backup file 2.gz to /home/vagrant/2.gz on the Vagrant box.
With PuTTY I connected via ssh to my new Vagrant box.
On the box, I moved the backup to the filesystem root:
$ sudo mv 2.gz /
Extract the archive to the filesystem root:
$ sudo tar -xvpz -f 2.gz -C / --strip-components=2
(I discovered I need to use --strip-components because all files in the archive have the prefix backup/2/. I'll fix this for the next backup, as sketched below.)
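Assuming I keep the same naming scheme, the next backup would be 3.gz; creating the archive from inside the copy makes the member paths relative, so no --strip-components is needed on extraction:

$ sudo tar -czf /backup/3.gz -C /backup/2 .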
After the tar command completes, I log out of the box.
Testing the backup
When I try to log in again, it doesn't let me log in as vagrant with a password any more.
It does let me log in as iain, my user on the live Linode, with a password. That surprised me because I disabled password authentication on my live Linode. I figured I had to restart the ssh service for the change to take effect.
Instead of restarting just ssh, I chose to restart the whole system.
Now I can't even get to the login screen. PuTTY says "connection refused" when I try to connect.
What went wrong?
The reason you can no longer log in as vagrant is that you backed up /etc, which contains the shadow file. Since the shadow file from the Linode has no entry for the vagrant user, you can't log in with those credentials. You could still log in as your Linode user because those credentials are in the restored /etc/shadow, and the ssh daemon was still running with its default settings, which allow password authentication for everyone. If everything had gone smoothly, you could simply reload the ssh service to pick up the new configuration (the one from your backup that disables password authentication).
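On Ubuntu 12.04 that reload would look something like this:

$ sudo service ssh reload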
However, something must have gone wrong during the restore. From the information provided I can only guess at what, and I'd rather not. If you open the VirtualBox management window with the Vagrant box shut down, you can start the virtual machine manually. This lets you see any errors presented on the console during boot. If no errors appear, it at least lets you log in at the console (as opposed to over SSH). Use your normal account to get access and look around to see what's wrong. Something in /var/log will point you in the right direction (most probably /var/log/syslog).
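Once you are logged in at the console, a quick first pass over the logs might look like this (just an example filter, adjust to taste):

$ grep -iE 'error|fail' /var/log/syslog | tail -n 50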
After a bit of tweaking, you'll probably get your approach to mostly work, but I suggest a different approach altogether.
To keep the general configuration of the live and dev servers in sync, either use Puppet to deploy, or just be very careful. For synchronising your web code and uploaded files, rsync is OK; define what you want to include and keep it narrow: just the app, not the whole system. Rather than rsync, you might want to use Git or another revision control system: check in changes on dev, then check them out on live. Somewhere down the track you could automate this with a continuous deployment system.
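For example, a narrow rsync that pulls only the uploaded files from live down to dev might look like this (the hostname and paths are placeholders; substitute your own):

$ rsync -avz iain@mylinode.example.com:/var/www/mysite/uploads/ /var/www/mysite/uploads/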
You will probably also need a way to synchronise database content. Don't count on just copying the files MySQL stores its data in, unless you're prepared to take the database engine down at both ends while you do it, or you have some sort of snapshot mechanism at the filesystem level (e.g. using LVM).
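For MySQL, a dump-and-load keeps the engine running at both ends; a sketch, with mydb standing in for your database name (--single-transaction gives a consistent snapshot of InnoDB tables without locking):

$ mysqldump -u root -p --single-transaction mydb | gzip > mydb.sql.gz
$ gunzip < mydb.sql.gz | mysql -u root -p mydb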
You might want to combine the synchronisation of data from live to dev with your backup system, i.e. take backups of live and restore them to dev. This is good because you are constantly verifying your backups, but if you're backing up daily, recovering yesterday's backup may sometimes be inconveniently old.
Maybe it's overkill. It probably feels like more overkill right now than it really is, though. At least understand the model as something to move towards over time.