My idea was that (using loopback devices) it would work like this:
- You create the RAID array
sudo mkfs.btrfs -m raid1 -d raid1 /dev/loop1 /dev/loop2
- You mount them
sudo mount /dev/loop1 /mnt
and mark the good condition (touch good)
- You unmount and simulate a disk failure (remove the disk, or delete the loopback device loop2 in my case)
- You mount degraded
-o degraded
and mark it again (touch degraded)
- You add the bad disk again
sudo btrfs dev add /dev/loop2 /mnt
- You rebalance
sudo btrfs fi ba /mnt
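For completeness, the loop devices assumed in the steps above have to be created first. A minimal sketch (the backing file names disk1.img/disk2.img and the 4 GB size are my own choices; losetup and mkfs.btrfs need root, so those lines are left commented as a preview):

```shell
# Create two 4 GB sparse backing files (works unprivileged).
truncate -s 4G disk1.img disk2.img

# Attaching them as loop devices and creating the filesystem needs root.
# The device names /dev/loop1 and /dev/loop2 match the question:
# sudo losetup /dev/loop1 disk1.img
# sudo losetup /dev/loop2 disk2.img
# sudo mkfs.btrfs -m raid1 -d raid1 /dev/loop1 /dev/loop2
```

Deleting disk2.img (or detaching it with losetup -d /dev/loop2) then simulates the disk failure.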
And RAID 1 should work again. But that's not the case. sudo btrfs fi show reports:
Total devices 3 FS bytes used 28.00KB
devid 3 size 4.00GB used 264.00MB path /dev/loop1
devid 2 size 4.00GB used 272.00MB path /dev/loop2
*** Some devices missing
The file degraded lives on loop1 but not on loop2 when loop2 is mounted in degraded mode.
Why is that?
In this situation, you need to do two things. First, you need to indicate to btrfs that the missing device is permanently gone:
btrfs dev delete missing /mnt
(missing is a keyword indicating any missing devices). Second, you need to rebalance to ensure that the data is properly replicated:
btrfs fi balance /mnt
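Put together, the recovery sequence would look like this. This is only a sketch using the loop devices from the question; the DRY_RUN flag and run helper are my own additions so the script prints the commands instead of executing them, since the real commands need root and a degraded btrfs filesystem:

```shell
#!/bin/sh
# Preview mode by default: print each command instead of executing it.
# Set DRY_RUN=0 and run as root against real devices to apply the fix.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

run mount -o degraded /dev/loop1 /mnt   # mount the surviving device
run btrfs device delete missing /mnt    # tell btrfs the lost device is gone for good
run btrfs device add /dev/loop2 /mnt    # attach the replacement device
run btrfs filesystem balance /mnt       # re-replicate all data as RAID1
```

The key difference from the steps in the question is that the missing device is removed from the array before the replacement is added.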
It looks like you added a third device, so the original second one is still missing. I guess you need to remove the missing device before adding the new one. The btrfs mailing list might also be a better place to ask this question.