I have a failing hard disk (let's call it `sda`) which contains, among other things, a 1.5 TB partition (let's call it `sda3`). There is another disk (`sdb`), which also has a 1.5 TB partition (`sdb1`). Both used to be members of an mdadm RAID 1 array using metadata version 1.2. Inside this array (let's call it `md5`) there is a LUKS encryption container (let's call it `md5_uncrypted`), which in turn should contain an ext4 filesystem.
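For reference, under normal circumstances the stack would be brought up roughly like this (just a sketch using the names above; the mount point is a placeholder):

    mdadm --assemble /dev/md5 /dev/sda3 /dev/sdb1    # assemble the RAID 1 array
    cryptsetup luksOpen /dev/md5 md5_uncrypted       # open the LUKS container on it
    mount /dev/mapper/md5_uncrypted /mnt             # mount the ext4 filesystem inside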
At some point around August 10, 2012, I somehow restarted the RAID array with `sdb1` missing and didn't even notice. When I wanted to replace the RAID yesterday (three months later), I started copying data from `sdb1` until I realized that it was out of date. So I took a look at the old `sda3`. By mistake, I ran `mdadm --create` instead of `mdadm --assemble` to restart `md5` with only `sda3` available, and I ignored all warnings and let `mdadm --create` continue. `cryptsetup` didn't like the content of the new RAID. I didn't think `mdadm --create` would corrupt data as long as the same metadata version is used, but apparently it does.
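Roughly, here is what I should have run versus what I actually ran (I am reconstructing the exact options from memory, so treat them as approximate):

    # what I meant to do: re-assemble the existing, now degraded array
    mdadm --assemble --run /dev/md5 /dev/sda3

    # what I actually did: create a NEW array, writing fresh metadata onto sda3
    mdadm --create /dev/md5 --level=1 --raid-devices=2 --metadata=1.2 /dev/sda3 missing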
I compared the first 20 MB of `sda3` and `sdb1` and noticed that they are equal starting at about 8 MB. So I copied the first 8 MB of `sdb1` to `sda3` (I have a backup of the old first 20 MB of `sda3`) and tried to assemble `md5` (with only a single drive, `sda3`). Unfortunately, this gave me an error:

    failed to add /dev/sdb1: Invalid argument
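For the record, the backup and the copy were done along these lines (sizes and file names are only illustrative):

    # back up the region of sda3 that is about to be overwritten
    dd if=/dev/sda3 of=sda3-first20M.img bs=1M count=20

    # copy the first 8 MB of sdb1 over the start of sda3
    dd if=/dev/sdb1 of=/dev/sda3 bs=1M count=8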
I also tried using the LUKS header from `sdb1` on a freshly `mdadm --create`d `sda3`, which `cryptsetup` happily accepted (of course), but the resulting volume only contained garbage.
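One way to do such a header transplant is with cryptsetup's backup/restore commands; a sketch, assuming the array built from `sdb1` shows up as `/dev/md5` and the re-created one on `sda3` as `/dev/md127` (both names are placeholders):

    # save the LUKS header from the array that still carries the old data
    cryptsetup luksHeaderBackup /dev/md5 --header-backup-file md5-luks-header.img

    # write it onto the newly created array (destructive!)
    cryptsetup luksHeaderRestore /dev/md127 --header-backup-file md5-luks-header.img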
My question is: Is there any chance to restore at least some of the data from `sda3`? Since I already have the three-month-old state (on `sdb1`), anything helps, even just a list of files, or a list of files with their modification dates.
Edit: here is the output of `mdadm --examine` for both partitions:
    # mdadm --examine /dev/sdb1
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 76a25475:70aa881c:dd30cc93:bbae03b7
               Name : ubuntu:0
      Creation Time : Fri Mar 16 20:52:16 2012
         Raid Level : raid1
       Raid Devices : 2
     Avail Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
         Array Size : 1465129848 (1397.26 GiB 1500.29 GB)
      Used Dev Size : 2930259696 (1397.26 GiB 1500.29 GB)
        Data Offset : 2048 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : b9012482:afa502cf:7794f4fb:2a0da196
        Update Time : Wed Nov 21 20:51:51 2012
           Checksum : 4e54a07 - correct
             Events : 15003
        Device Role : Active device 1
        Array State : .A ('A' == active, '.' == missing)

    # mdadm --examine /dev/sda3
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 54ea031e:42367512:b6a8675b:91f2cb6f
               Name : willow:5 (local to host willow)
      Creation Time : Wed Nov 21 18:03:35 2012
         Raid Level : raid1
       Raid Devices : 2
     Avail Dev Size : 2929999872 (1397.13 GiB 1500.16 GB)
         Array Size : 1464999744 (1397.13 GiB 1500.16 GB)
      Used Dev Size : 2929999488 (1397.13 GiB 1500.16 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : active
        Device UUID : 93c70c36:3cc540a5:13817695:bd4f327c
        Update Time : Wed Nov 21 18:03:35 2012
           Checksum : 321ddb3e - correct
             Events : 0
        Device Role : Active device 1
        Array State : .A ('A' == active, '.' == missing)
First, if you have a spare HDD, I would strongly advise mirroring `sda3` and working only with the mirror.
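For example (paths are placeholders; GNU `ddrescue` copes with read errors on a dying disk better than plain `dd` would):

    # mirror the failing partition to an image on a healthy disk
    ddrescue /dev/sda3 /mnt/spare/sda3.img /mnt/spare/sda3.map

    # further experiments can then run against a loop device backed by the image
    losetup --find --show /mnt/spare/sda3.img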
`mdadm --create` with the same options shouldn't corrupt data unless the defaults of unspecified options changed between the version that originally created the array and the current version.

Did you compare the superblocks on `sdb1` and `sda3` with `mdadm --examine`?

Unless you have added, changed, or removed keys, the LUKS header should be identical. Have you tried restoring a `luksHeaderBackup` taken from the `sdb1` array onto the re-created array on `sda3`?
Different offsets of the LUKS header (`{'L','U','K','S',0xba,0xbe}`) on `sdb1` and `sda3` would explain the garbage in the LUKS volume.
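A quick way to check is to search for that magic near the start of both partitions and compare the byte offsets (the 200 MB search window below is arbitrary):

    for dev in /dev/sdb1 /dev/sda3; do
        echo "== $dev =="
        # scan the first 200 MB for the LUKS magic, printing byte offsets
        dd if="$dev" bs=1M count=200 2>/dev/null |
            LC_ALL=C grep -abo $'LUKS\xba\xbe' | head
    done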