Open WRMSRwasTaken opened 3 years ago
Running into the same issue here, with a simple btrfs partition that was backed up by Clonezilla along with a disk (which contains an ESP partition, an ext4 partition and the btrfs partition, without any RAID or LVM). Clonezilla did not print any error message on the screen during the backup and restore process.
ZSTD compression (`-o compress=zstd:1`) was also enabled before the backup.
After restoring the Clonezilla image, the broken filesystem keeps producing the following messages on almost every `btrfs check` (even the dangerous `--repair --force` with `--init-csum-tree` and `--init-extent-tree`) and `btrfs restore`:
```
(omitting rescue options hint...)
checksum verify failed on 1081344 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 1081344 wanted 0x00000000 found 0xb6bde3e4
bad tree block 1081344, bytenr mismatch, want=1081344, have=0
ERROR: cannot read chunk root
ERROR: cannot open file system
```
and the kernel log (dmesg) when trying to mount (even with all recovery options):
```
[ 2647.659360] BTRFS info (device sda3): flagging fs with big metadata feature
[ 2647.659364] BTRFS info (device sda3): use zstd compression, level 1
[ 2647.659365] BTRFS info (device sda3): disk space caching is enabled
[ 2647.659366] BTRFS info (device sda3): has skinny extents
[ 2647.660069] BTRFS error (device sda3): bad tree block start, want 1081344 have 0
[ 2647.660074] BTRFS error (device sda3): failed to read chunk root
[ 2647.660236] BTRFS error (device sda3): open_ctree failed
```
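Since the kernel reads all-zero data where the chunk root should be, one quick sanity check (a sketch, not part of the original report; the device path is the one from the logs above) is whether the restored partition still carries the btrfs superblock magic, which sits 64 bytes into the primary superblock at offset 65536:

```shell
# Read the 8-byte btrfs magic ("_BHRfS_M") from the primary superblock.
# The superblock starts at byte offset 65536 (0x10000); the magic field is at +64.
# /dev/sda3 is the restored partition from the dmesg output above.
dd if=/dev/sda3 bs=1 skip=$((65536 + 64)) count=8 2>/dev/null
# An intact superblock prints: _BHRfS_M
```

Since `super-recover` reports all supers as valid (below), the magic is presumably intact here, and the damage is limited to tree blocks outside the superblock copies.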
Tried `btrfs rescue chunk-recover`, which reported success, but I still get the above messages after the "successful" rescue:
```
(omitting chunk information ......)
Unrecoverable Chunks:
Total Chunks: 277
Recoverable: 277
Unrecoverable: 0
Orphan Block Groups:
Orphan Device Extents:
Check chunks successfully with no orphans
Chunk tree recovered successfully
```
`./btrfs rescue super-recover /dev/sda3` returns:

```
All supers are valid, no need to recover
```
I was using `clonezilla-live-20220620-jammy-amd64.iso` to back up and restore the disk; `btrfs-progs` 5.19 and 5.16.2 show the same behavior described above.
Same here. Clone and recover run with no errors, but the resulting filesystem is broken and won't mount.
Also tested restoring on top of the same partition, and that worked fine. It does not look like partclone is breaking anything; it is just not copying something it should.
Same issue here with clonezilla-live-3.1.0-22-amd64, using LUKS as well. Does anyone have updates or ideas on how to restore/access the files? Sadly I don't have another backup, as the Clonezilla image was my backup.
After running partclone.btrfs from a raw md array to md -> bcache -> dm-crypt via the latest Arch Linux live ISO, I am unable to mount the newly cloned filesystem. Partclone finished without any errors.
I was running the command:

```
partclone.btrfs -b -s /dev/md126 -o /dev/mapper/cryptroot
```
The error in a remote KVM window is:
Kernel version is 5.13.13, partclone.btrfs version is v0.3.17.
The original filesystem works without issue, and scrubbing it before the clone also yielded no errors. The filesystem is ZSTD-compressed, if that matters.
Sorry if any information is missing; this machine is only reachable via a web KVM on a remote server, so I may not be able to provide everything. For now, I had to cancel the maintenance and boot the original filesystem/device again, which still works without any issues.
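To narrow down whether partclone skipped the block the chunk root lives in, one could hash the region around the failing bytenr (1081344, from the earlier logs) on both source and destination. This is only a sketch using the device paths from this comment, and with a caveat: a bytenr is a logical address, which on a fresh single-device filesystem often matches the physical offset but should be verified with `btrfs inspect-internal dump-super`/`dump-tree`.

```shell
# Hash 16 KiB starting at logical bytenr 1081344 on source and clone;
# differing hashes mean the clone diverges right where the chunk root lives.
# 264 * 4096 = 1081344; paths are the ones from the partclone command above.
for dev in /dev/md126 /dev/mapper/cryptroot; do
  dd if="$dev" bs=4096 skip=264 count=4 2>/dev/null | sha256sum
done
```

If the clone's hash matches 16 KiB of zeros, that would line up with the kernel's "want 1081344 have 0" error above.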
Edit: This might be related to https://github.com/Thomas-Tsai/partclone/issues/158, but I am not running quotas at all.
Edit: I was not able to run `btrfsck --repair --force` on the block device. It simply refused with the same error message as btrfstune above.