Closed — ConsoleXXVII closed this issue 1 year ago
I can confirm the problem is hardware-independent, and it looks more like a bug in how the values are interpreted.

For `btrfs scrub start`, the final report shows the last physical location scrubbed, which includes all the unused space.

For `btrfs scrub status`, it reports the real sectors scrubbed, which excludes all the unused space.

This means for a filesystem like the following:
```
$ sudo btrfs fi df /mnt/btrfs/
Data, single: total=1.01GiB, used=9.06MiB
System, DUP: total=40.00MiB, used=64.00KiB
Metadata, DUP: total=256.00MiB, used=1.38MiB
GlobalReserve, single: total=22.00MiB, used=0.00B
```
The `btrfs scrub start -B` command would report something like 1.01GiB + 40MiB * 2 + 256MiB * 2, which is around 1.5GiB:
```
$ sudo btrfs scrub start -B /mnt/btrfs/
scrub done for c107ef62-0a5d-4fd7-a119-b88f38b8e084
Scrub started:    Mon Jun 5 07:54:07 2023
Status:           finished
Duration:         0:00:00
Total to scrub:   1.52GiB
Rate:             0.00B/s
Error summary:    no errors found
```
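The arithmetic behind that figure can be sketched as follows (a rough estimate, assuming only that DUP chunks occupy twice their logical size on disk; the kernel's exact chunk accounting differs slightly, hence 1.52GiB rather than 1.59GiB):

```python
# Estimate what "btrfs scrub start -B" reports as "Total to scrub":
# it counts the whole allocated chunk range (total=, including unused
# space), and DUP profiles are stored twice on disk.
MIB = 1
GIB = 1024 * MIB

data_total = 1.01 * GIB      # Data, single: total=1.01GiB
system_total = 40 * MIB      # System, DUP: total=40.00MiB
metadata_total = 256 * MIB   # Metadata, DUP: total=256.00MiB

total_to_scrub = data_total + 2 * system_total + 2 * metadata_total
print(f"{total_to_scrub / GIB:.2f} GiB")  # -> 1.59 GiB, same ballpark as the reported 1.52GiB
```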
But `btrfs scrub status` would only report 9.06MiB + 64KiB * 2 + 1.38MiB * 2, which is just around 12MiB:
```
$ sudo btrfs scrub status /mnt/btrfs/
UUID:             c107ef62-0a5d-4fd7-a119-b88f38b8e084
Scrub started:    Mon Jun 5 07:54:07 2023
Status:           finished
Duration:         0:00:00
Total to scrub:   12.00MiB
Rate:             0.00B/s
Error summary:    no errors found
```
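The same back-of-the-envelope check for the status figure (again a sketch under the assumption that DUP data is counted twice; rounding in the tool explains the displayed 12.00MiB):

```python
# Estimate what "btrfs scrub status" reports as "Total to scrub":
# only the used (actually scrubbed) bytes, with DUP profiles counted twice.
MIB = 1.0
KIB = MIB / 1024

data_used = 9.06 * MIB       # Data, single: used=9.06MiB
system_used = 64 * KIB       # System, DUP: used=64.00KiB
metadata_used = 1.38 * MIB   # Metadata, DUP: used=1.38MiB

total_to_scrub = data_used + 2 * system_used + 2 * metadata_used
print(f"{total_to_scrub:.2f} MiB")  # roughly 12 MiB, displayed as 12.00MiB
```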
I believe we need to update the progs to unify the output, mostly to follow the `btrfs scrub status` output.
Fix added to devel, thanks.
Thanks
The final report given by the command `btrfs scrub start -B /mount_point` shows a wrong value in the field `Total to scrub:`. Using `btrfs scrub status /mount_point` reports the correct value.

Example:
System (Raspberry Pi OS, arm64):

This bug also exists on kernel 5.15 arm64, with btrfs-progs 5.10.