When I ran btrfs check --repair to try to fix a filesystem error, I got the following output:
$ sudo btrfs check --repair /dev/sda1
enabling repair mode
WARNING:
Do not use --repair unless you are advised to do so by a developer
or an experienced user, and then only after having accepted that no
fsck can successfully repair all types of filesystem corruption. Eg.
some software or hardware bugs can fatally damage a volume.
The operation will start in 10 seconds.
Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
parent transid verify failed on 111692120064 wanted 360498 found 364003
parent transid verify failed on 111692120064 wanted 360498 found 364003
Ignoring transid failure
parent transid verify failed on 29671424 wanted 360534 found 364003
parent transid verify failed on 29671424 wanted 360534 found 364003
Ignoring transid failure
parent transid verify failed on 34750464 wanted 363754 found 364003
parent transid verify failed on 34750464 wanted 363754 found 364003
Ignoring transid failure
Checking filesystem on /dev/sda1
UUID: e75d569a-400c-4076-8b9f-903a7a1f4f03
[1/7] checking root items
parent transid verify failed on 111692120064 wanted 360498 found 364003
Ignoring transid failure
parent transid verify failed on 29671424 wanted 360534 found 364003
Ignoring transid failure
parent transid verify failed on 34750464 wanted 363754 found 364003
Ignoring transid failure
Fixed 0 roots.
[2/7] checking extents
parent transid verify failed on 29671424 wanted 360534 found 364003
Ignoring transid failure
parent transid verify failed on 34750464 wanted 363754 found 364003
Ignoring transid failure
parent transid verify failed on 111692120064 wanted 360498 found 364003
Ignoring transid failure
extent back ref already exists for 21790720 parent 0 root 10
extent back ref already exists for 21839872 parent 0 root 10
extent back ref already exists for 21856256 parent 0 root 10
extent back ref already exists for 22003712 parent 0 root 10
extent back ref already exists for 22036480 parent 0 root 10
extent back ref already exists for 23396352 parent 0 root 10
extent back ref already exists for 23412736 parent 0 root 10
parent transid verify failed on 37912576 wanted 351287 found 364003
parent transid verify failed on 37912576 wanted 351287 found 364003
Ignoring transid failure
parent transid verify failed on 38420480 wanted 351287 found 364003
parent transid verify failed on 38420480 wanted 351287 found 364003
Ignoring transid failure
parent transid verify failed on 38486016 wanted 351287 found 364003
parent transid verify failed on 38486016 wanted 351287 found 364003
Ignoring transid failure
parent transid verify failed on 46858240 wanted 351287 found 364003
parent transid verify failed on 46858240 wanted 351287 found 364003
Ignoring transid failure
parent transid verify failed on 62865408 wanted 363983 found 364003
parent transid verify failed on 62865408 wanted 363983 found 364003
Ignoring transid failure
parent transid verify failed on 67059712 wanted 363983 found 364003
parent transid verify failed on 67059712 wanted 363983 found 364003
Ignoring transid failure
parent transid verify failed on 71417856 wanted 360532 found 364003
parent transid verify failed on 71417856 wanted 360532 found 364003
Ignoring transid failure
Chunk[256, 228, 182558130176]: length(1073741824), offset(182558130176), type(1) is not found in block group
Chunk[256, 228, 192221806592]: length(1073741824), offset(192221806592), type(1) is not found in block group
Chunk[256, 228, 279194894336]: length(1073741824), offset(279194894336), type(1) is not found in block group
Chunk[256, 228, 299595988992]: length(1073741824), offset(299595988992), type(1) is not found in block group
well this shouldn't happen, extent record overlaps but is metadata? [21790720, 16384]
Aborted
The Aborted message appears about a minute after the prompt, so I assumed that btrfs check was waiting for me to make a hard decision. The problem is: what are my options, and how do I tell the program to carry out the fix?
I'm using Ubuntu 22.04; the btrfs filesystem is on a second HDD and is not mounted.
Thanks a lot for your help!
btrfs check isn't asking you to make a decision; it is telling you that it cannot repair the damage it found. Your best course of action is to make sure you have good, current backups, and then reformat the partition.
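One possible workflow, sketched below: btrfs restore can often salvage files from a damaged filesystem without writing to it, which is worth trying before reformatting. The device name /dev/sda1 comes from your output; the destination /mnt/recovery is an assumption and must be a directory on a different, healthy filesystem with enough free space. Do not run the mkfs step until you have verified the recovered data.

```shell
# Salvage what can still be read; btrfs restore only reads from /dev/sda1.
# /mnt/recovery is a placeholder - it must live on another, healthy disk.
sudo mkdir -p /mnt/recovery
sudo btrfs restore /dev/sda1 /mnt/recovery

# Only after verifying the recovered data:
# recreate the filesystem. WARNING: this destroys everything on /dev/sda1.
sudo mkfs.btrfs -f /dev/sda1
```

If btrfs restore fails on the default trees, its -t option can point it at an older tree root found with btrfs-find-root, but with transid failures like yours there is no guarantee any consistent root remains.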