I am trying to recover a (MySQL) database from a crashed disk. There are a number of recent dumps, which are corrupted bz2 files. Since the database does not change often, the dumps should be nearly identical. bzip2recover
recovered about 70-80% of the chunks from the files, so most if not all of the data could be recovered by finding the overlaps in the files and joining them together. For example:
dump1: |-----------------|xxxxxxxxxxxxxxxx|------------------|
dump2: |-------------|----------------|xxxxxxxxxxxxxxxxxxxxxx|
dump3: |xxxxxxxxxxxxxxxxxxxxxx|---------------|xxxxxxxxxxxxxx|
Here I can detect that the first chunk in dump1 is continued by the second one in dump2, which is continued by the second in dump3, which is continued by the third in dump1. By joining these four chunks, I have recovered the data.
The problem is that there are thousands of files (I have ten dumps of ~400 1M chunks each). Is there a tool that could automate this process, or at least parts of it (like a Linux command that checks for the longest overlap between the end of one file and the start of another)?
I needed this exact same thing. I came up with this surprisingly fast Python code (it joined two 2 GB files with an 800 MB overlap in 30 seconds). Adjust overlap_size as necessary for your chunks: it should be as long as possible, but less than the real overlap size.
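A minimal sketch of the approach described, assuming an mmap-backed rfind search; the file handling and all names besides overlap_size are hypothetical:

    import mmap
    import sys

    # Should be smaller than the real overlap, but large enough that a
    # match cannot happen by accident.
    overlap_size = 100 * 1024 * 1024  # 100 MB

    def join(left_path, right_path, out_path):
        # The first overlap_size bytes of the right file serve as the search key.
        with open(right_path, 'rb') as right:
            key = right.read(overlap_size)

        with open(left_path, 'rb') as left:
            # mmap lets rfind scan the file without loading it all into RAM.
            data = mmap.mmap(left.fileno(), 0, access=mmap.ACCESS_READ)
            pos = data.rfind(key)
            if pos < 0:
                sys.exit('no overlap found')
            with open(out_path, 'wb') as out:
                out.write(data[:pos])      # left file up to the match...
                with open(right_path, 'rb') as right:
                    while True:            # ...then the whole right file
                        block = right.read(1 << 20)
                        if not block:
                            break
                        out.write(block)
            data.close()

    if __name__ == '__main__':
        join(*sys.argv[1:4])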
Usage:
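Something like (script and file names hypothetical):

    python join_overlap.py dump1_chunk1 dump2_chunk2 joined_part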
I don't have a tool for you that will do the whole job, but you can use tools like cmp. With its -l option, cmp lists the differing bytes and their offsets; the overlapping region is wherever cmp prints no offsets. You can also use the dd command to copy part of one dump and append it to another. You can try writing your own script around such tools, or a small C program that compares these files and copies the needed parts.
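For example, something along these lines (file names are hypothetical, and a 100 MB overlap is assumed):

    # List differing bytes as offset/value pairs; a silent stretch means the bytes match.
    cmp -l dump1_chunk2 dump2_chunk2

    # Copy the first 100 MB of one chunk...
    dd if=dump2_chunk2 of=joined bs=1M count=100
    # ...then append everything past the 100 MB mark from the other chunk.
    dd if=dump1_chunk2 bs=1M skip=100 >> joined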
I hope you find those ideas helpful.
Traditionally, this would be diff. It produces the "difference" of two given text files as its output, along with some control information (what has been added, what has been removed, which lines to check). The patch command is able to reverse the process.

In theory, you should be able to run diff on your different chunks, work a bit on its output (like removing the commands for line deletion), and feed it to patch:
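Roughly like this (file names hypothetical):

    diff chunk_a chunk_b > chunks.diff
    # hand-edit chunks.diff to remove the deletion commands, then:
    patch chunk_a < chunks.diff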
Note that diff will need a vast amount of memory if you have very large input files.

I think you're just going to have to write such a tool yourself.
Start out with the largest file and copy it into memory as your image.
Then run through all the files, one by one, looking for an overlap with either the first or last chunk of the current memory image. If you find an overlap, extend the memory image.
Repeat until you take a pass through all the files without adding any bytes. Then write the memory image to a file.
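A rough sketch of that loop in Python; the probe size and the rfind-based overlap check are assumptions, not a tested tool:

    import os
    import sys

    # Hypothetical probe size: long enough to be unambiguous, but shorter
    # than the smallest real overlap between two chunks.
    PROBE = 1 << 20  # 1 MB

    def try_extend(image, data):
        # Right overlap: the head of data appears near the end of the image.
        pos = image.rfind(data[:PROBE])
        if pos >= 0 and image[pos:] == data[:len(image) - pos]:
            return image[:pos] + data
        # Left overlap: the head of the image appears near the end of data.
        pos = data.rfind(image[:PROBE])
        if pos >= 0 and data[pos:] == image[:len(data) - pos]:
            return data[:pos] + image
        return None

    def merge(paths, out_path):
        # Start with the largest file as the in-memory image.
        paths = sorted(paths, key=os.path.getsize, reverse=True)
        with open(paths[0], 'rb') as f:
            image = f.read()
        pending = paths[1:]
        grew = True
        # Sweep over the remaining files until a full pass adds no bytes.
        while grew and pending:
            grew = False
            for path in list(pending):
                with open(path, 'rb') as f:
                    data = f.read()
                extended = try_extend(image, data)
                if extended is not None:
                    image = extended
                    pending.remove(path)
                    grew = True
        with open(out_path, 'wb') as out:
            out.write(image)

    if __name__ == '__main__':
        merge(sys.argv[1:-1], sys.argv[-1])

Chunks that fall entirely inside the image are simply left unmerged here, and the whole approach only works while the joined image fits in memory.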