You've probably seen the messages on the Stack Overflow blog and on Coding Horror:
"blog.stackoverflow.com experienced 100% data loss at our hosting provider, CrystalTech. We're working to restore it from backups ASAP!"
Some of the stuff Jeff's doing is on Twitter. What would you be doing in a similar situation?
Firstly, make your own damn offsite backups. And test them. There are dirt cheap services that handle this. Or if you insist your expert hosting company should do this for you, make them restore an image from backup monthly just to keep them on their toes. But seriously, even just subscribing to your own RSS feeds with a client set to cache forever is a good first step.
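A minimal sketch of that first step, assuming a Unix host; every path here is a placeholder, and the offsite push is left as a comment:

```shell
#!/bin/sh
# Back up a directory, then actually TEST the restore by unpacking into a
# scratch directory and diffing against the live copy. Paths illustrative.
backup_and_test_restore() {
    site_dir=$1                          # what to save, e.g. /var/www/blog
    out_dir=$2                           # staging area before the offsite push
    archive="$out_dir/site-$(date +%F).tar.gz"

    tar czf "$archive" -C "$site_dir" .

    # The step everyone skips: restore into scratch and compare.
    scratch=$(mktemp -d)
    tar xzf "$archive" -C "$scratch"
    diff -r "$site_dir" "$scratch" || return 1
    rm -rf "$scratch"

    # Offsite half (hypothetical destination), e.g.:
    #   scp "$archive" backup@offsite.example.com:backups/
    echo "$archive"
}
```

The monthly drill suggested above is just running the restore half against the hosting provider's copy instead of your own.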
Secondly, the Wayback Machine and the Google cache can help. So can your own browser cache, so make copies of it before you thrash it. I hope you haven't been following the "clear your cache" step that helpdesks like to hand out.
Worst case (which is probably where they are now): reconstruct everything from sources such as those above.
I would double up on my efforts to design and build a time machine. Once that project was complete I would go back to late last week and smack myself around the head until I had set up (and tested) proper offsite backups.
In the absence of success in the TimeMachine project, all I could do would be to wait patiently for the host to do what they could, and hope that their backup arrangements were sufficient for the data (or at least a recent copy thereof) to be restored in a short amount of time. I would then make sure that the above-mentioned plans were made, implemented, and regularly tested.
There isn't much to do. Find a new host, restore from backups.
If the host really wanted to play ball and be nice, they'd immediately freeze any use of the disks in question and get them to a data recovery specialist... but in reality that's never going to happen.
I tar and mysqldump all my hosting data to disk, then move it to a bigger RAID-type disk. I have never deleted a single copy of a backup. Yes, it takes up tons of disk space, but I value my users' data more than my own life! (A little exaggerated, but I see it as my responsibility to ensure data integrity.)
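A sketch of that scheme, with made-up paths and no real database required; the dated filenames are what guarantee an old copy is never overwritten:

```shell
#!/bin/sh
# "tar and mysqldump everything, keep every copy" -- illustrative paths only.
nightly_backup() {
    data_dir=$1     # e.g. /var/www
    raid_dir=$2     # e.g. /mnt/raid/backups

    # Timestamped names: each run gets its own file, nothing is clobbered.
    stamp=$(date +%Y-%m-%d_%H%M%S)
    tar czf "$raid_dir/files-$stamp.tar.gz" -C "$data_dir" .

    # Dump the databases too, if mysqldump is available on this box
    # (it may not be, so the sketch doesn't hard-fail without it).
    if command -v mysqldump >/dev/null 2>&1; then
        mysqldump --all-databases | gzip > "$raid_dir/db-$stamp.sql.gz"
    fi
}
```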
I don't know why other companies don't do something similar, but I'll keep my opinions to myself :).
If I were in your shoes, I would set up a Linux box (Ubuntu preferred) and run programs like:
Then mount my site as a local directory and use my own backup scripts to back up the data.
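Assuming the host offers SSH access, one way to get "my site as a local directory" is sshfs; the backup half is then an ordinary local script. The host name and paths below are made up, and the mount commands are left as comments since they need the sshfs package and a real server:

```shell
#!/bin/sh
# Mount the remote site locally, then back it up like any local directory.
#
#   mkdir -p ~/mnt/mysite
#   sshfs me@myhost.example.com:/var/www ~/mnt/mysite

backup_mounted_site() {
    mount_point=$1      # e.g. ~/mnt/mysite
    backup_dir=$2       # e.g. ~/backups
    tar czf "$backup_dir/site-$(date +%F).tar.gz" -C "$mount_point" .
}

#   backup_mounted_site ~/mnt/mysite ~/backups
#   fusermount -u ~/mnt/mysite
```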
You select a new host (or stay, if they have a really, really good reason for losing your data), and restore from your offsite backup. ;)
The first thing I would do is find out what 100% loss means. Even with disk failures, data can be extracted. Unless the whole server was melted in a fire, I'm sure something can be recovered.
You can modify your website so that when you create a blog post, it automatically creates the same post on a backup site: a complete mirror on other hosting (free hosting such as Google App Engine, or your development machine), kept inaccessible to ordinary internet users. When your main host loses all its data, you can just copy everything back from the backup host, with no changes to the database or site structure needed. And while your main host is down, you can point your domain's DNS record at the backup site's IP (and perhaps loosen some access permissions on the backup site).
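The "every post goes to the mirror too" idea fits in a few lines. In this sketch the two stores are just directories standing in for the main host and the backup host; names and layout are purely illustrative:

```python
from pathlib import Path


def publish(slug: str, body: str, main: Path, mirror: Path) -> None:
    """Write a post to the main store and the mirror in one operation,
    so the backup copy can never lag behind the live site."""
    for store in (main, mirror):
        store.mkdir(parents=True, exist_ok=True)
        (store / f"{slug}.html").write_text(body, encoding="utf-8")
```

Failover is then just repointing DNS: the mirror already has every post, because no post ever existed on only one side.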
And of course, if you cannot afford backup hosting, you can at least use a file system with proven recovery tools. I'd suggest NTFS for valuable "backups".
I use rsnapshot to back up my remote servers. It logs in every night from the home office and backs up everything that changed during the day. It's light on bandwidth and has a small footprint. One problem is that it only works on UNIX-like systems.
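For reference, an rsnapshot setup is mostly a config file plus a cron entry. This is a heavily trimmed sketch: fields in rsnapshot.conf must be tab-separated, exact directive names vary a little between versions, and the host and paths are made up:

```
# /etc/rsnapshot.conf (fragment) -- fields separated by TABs, not spaces
snapshot_root	/backups/rsnapshot/
cmd_ssh	/usr/bin/ssh
retain	daily	7
retain	weekly	4
# Pull /var/www from the remote box over SSH on each run
backup	me@myhost.example.com:/var/www/	myhost/
```

Then, in cron on the home-office machine: `0 3 * * * /usr/bin/rsnapshot daily`. Because rsnapshot is built on rsync plus hard links, each nightly snapshot transfers and stores only what changed, which is why it stays light on bandwidth and disk.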
I would think about how much faster and better it would be to rebuild the second (or nth) time around...