Closed — githubjsorg closed this issue 11 months ago
Please provide the complete output while using `--debug --verbose --debug-output`.
Here you go. Complete Output.txt
Sorry, have been a bit busy.
can you try with 3.2-beta1?
It seems CloudBackup/Guests was somehow already created at the target, but it doesn't have any snapshots. So zfs-autobackup tries to do a full backup over it, but the dataset already has data (probably the mountpoint directories that were created).
It might help to run zfs-autobackup once with the -F (force) option, so it removes that data and creates the proper snapshots/data. It shouldn't delete any child datasets on the target.
Hi, I have the same/similar? issue on my Proxmox machine:
```
sudo zfs-autobackup backup2external backups14tb/zfs-autobackup --keep-source=0 --keep-target=1,1d1w,1m1y --decrypt --encrypt --clear-mountpoint --exclude-received -F --debug --verbose --debug-output
```
output-zfs-autobackup-v3.1.3.txt
Same with 3.1-beta1
```
#### Synchronising
[Source] tank/encrptd: sending to backups14tb/zfs-autobackup/tank/encrptd
[Target] backups14tb/zfs-autobackup/tank/encrptd@backup2external-20230522155723: receiving full
! [Target] STDERR > cannot receive new filesystem stream: destination 'backups14tb/zfs-autobackup/tank/encrptd' exists
! [Target] STDERR > must specify -F to overwrite it
! [Source] Command "zfs send --large-block --verbose --parsable tank/encrptd@backup2external-20230522155723" returned exit code 141 (valid codes: [0])
! [Target] Command "zfs recv -u -x keylocation -x pbkdf2iters -x keyformat -x encryption -o canmount=noauto -v -s backups14tb/zfs-autobackup/tank/encrptd" returned exit code 1 (valid codes: [0])
! [Source] tank/encrptd: FAILED: Last command returned error
```
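One way to see why a full send is being attempted is to check whether source and target share any snapshot GUID; without a common GUID there is no incremental base, so a full stream is the only option (and it fails because the target dataset already exists). A minimal sketch of that check, which parses the output of `zfs list -H -t snapshot -o name,guid` — the sample data below is illustrative, not taken from this log:

```python
def common_snapshots(source_listing: str, target_listing: str) -> list[str]:
    """Return snapshot names (the part after '@') whose GUIDs appear on
    both sides.  Each listing is the text produced by
    `zfs list -H -t snapshot -o name,guid <dataset>`:
    one snapshot per line, name and GUID separated by a tab."""
    def guids(listing):
        out = {}
        for line in listing.strip().splitlines():
            name, guid = line.split("\t")
            out[guid] = name.split("@", 1)[1]
        return out

    src, tgt = guids(source_listing), guids(target_listing)
    # A GUID present on both sides identifies a usable incremental base.
    return [src[g] for g in src if g in tgt]

# Illustrative data: the target dataset exists but holds no matching
# snapshot -- exactly the situation that forces a (failing) full send.
src = "tank/encrptd@backup2external-20230522155723\t111\n"
tgt = ""
print(common_snapshots(src, tgt))  # → []
```

If this returns an empty list while the target dataset exists, you are in the "must specify -F to overwrite it" situation shown in the log.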
I started from scratch and it works now.
I used `true` instead of `child`:
```
sudo zfs set autobackup:backup2external=true tank
```
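For reference, the `autobackup:` property value controls dataset selection: `true` selects the dataset itself plus everything below it, `child` selects only the descendants, and `false` excludes a subtree. A toy Python model of that inheritance logic (a simplified illustration, not zfs-autobackup's actual code):

```python
def selected(datasets: list[str], props: dict[str, str]) -> list[str]:
    """Simplified model of zfs-autobackup dataset selection.
    `props` maps a dataset to its locally set autobackup:<name> value;
    ZFS property inheritance is emulated by walking down the path."""
    out = []
    for ds in datasets:
        parts = ds.split("/")
        val = src = None
        for i in range(1, len(parts) + 1):
            p = "/".join(parts[:i])
            if p in props:              # a locally set value overrides an inherited one
                val, src = props[p], p
        if val == "true":
            out.append(ds)              # 'true' selects the dataset and its children
        elif val == "child" and src != ds:
            out.append(ds)              # 'child' selects descendants only
        # any other value ('false' or unset) excludes the dataset
    return out

datasets = ["tank", "tank/encrptd", "tank/other"]
print(selected(datasets, {"tank": "child"}))  # → ['tank/encrptd', 'tank/other']
print(selected(datasets, {"tank": "true"}))   # → ['tank', 'tank/encrptd', 'tank/other']
```

With `child`, the top-level dataset `tank` itself is never sent, which changes what gets created at the target root.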
> It seems CloudBackup/Guests was somehow already created at the target, but it doesn't have any snapshots. So zfs-autobackup tries to do a full backup over it, but the dataset already has data (probably the mountpoint directories that were created).
> It might help to run zfs-autobackup once with the -F (force) option, so it removes that data and creates the proper snapshots/data. It shouldn't delete any child datasets on the target.
CloudBackup/Guests was initially created by zfs-autobackup, so I don't know why there wouldn't be any snapshots or why it couldn't update it.
I will try -F when I get a chance, but that doesn't explain the initial issue.
feel free to open a new issue if it happens again
I get an odd error when updating an existing backup (see full log below):
Command used:
```
/usr/local/bin/zfs-autobackup --debug --verbose --exclude-unchanged --clear-refreservation offsite1 CloudBackup
```
--------------LOGS------------------
And yet there are existing backups of this pool.
```
~# zfs list -t snapshot | grep Guests
CloudBackup/Guests/vm-100-disk-0@offsite1-20230323150312  18.2G  -  175G   -
CloudBackup/Guests/vm-100-disk-0@offsite1-20230401193436  27.6M  -  175G   -
CloudBackup/Guests/vm-100-disk-0@offsite1-20230401193646  0B     -  175G   -
CloudBackup/Guests/vm-106-disk-0@offsite1-20230323150312  0B     -  146G   -
CloudBackup/Guests/vm-200-disk-0@offsite1-20230323150312  0B     -  146G   -
Guests@offsite1-20230401193436                            18K    -  28K    -
Guests@offsite1-20230426215621                            13K    -  29K    -
Guests/subvol-102-disk-0@offsite1-20230401193436          13.2M  -  3.46G  -
Guests/subvol-102-disk-0@offsite1-20230426215621          10.7M  -  3.46G  -
Guests/subvol-103-disk-0@offsite1-20230401193436          5.39M  -  6.34G  -
Guests/subvol-103-disk-0@offsite1-20230426215621          3.72M  -  6.34G  -
Guests/subvol-105-disk-0@offsite1-20230401193436          803M   -  5.86G  -
Guests/subvol-105-disk-0@Install_VScanners                125M   -  5.87G  -
Guests/subvol-105-disk-0@offsite1-20230426215621          308M   -  6.98G  -
Guests/subvol-108-disk-0@offsite1-20230401193436          2.96M  -  996M   -
Guests/subvol-108-disk-0@offsite1-20230426215621          2.96M  -  996M   -
Guests/subvol-109-disk-0@offsite1-20230426215621          4.00M  -  4.21G  -
Guests/subvol-110-disk-0@offsite1-20230401193436          12.7M  -  2.54G  -
Guests/subvol-110-disk-0@offsite1-20230426215621          6.36M  -  2.55G  -
Guests/subvol-111-disk-0@offsite1-20230426215621          2.19M  -  713M   -
Guests/subvol-500-disk-0@offsite1-20230401193436          17.3M  -  4.10G  -
Guests/subvol-500-disk-0@offsite1-20230426215621          7.88M  -  4.11G  -
Guests/vm-100-disk-0@offsite1-20230323150312              6.35G  -  60.6G  -
Guests/vm-100-disk-0@offsite1-20230401193436              11.0M  -  60.6G  -
Guests/vm-100-disk-0@offsite1-20230401193646              11.0M  -  60.6G  -
Guests/vm-100-disk-0@offsite1-20230426215621              1.91G  -  60.6G  -
Guests/vm-106-disk-0@offsite1-20230323150312              1K     -  50.4G  -
Guests/vm-106-disk-0@offsite1-20230401193436              1K     -  50.4G  -
Guests/vm-106-disk-0@offsite1-20230426215621              0B     -  50.4G  -
Guests/vm-200-disk-0@offsite1-20230323150312              1K     -  50.5G  -
Guests/vm-200-disk-0@offsite1-20230401193436              1K     -  50.5G  -
Guests/vm-200-disk-0@offsite1-20230426215621              0B     -  50.5G  -
```
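Given a listing like the one above, a quick sanity check is to intersect snapshot names between the source tree and its copy under the target (zfs-autobackup itself matches by GUID, so name matching is only a rough check). A hypothetical helper that takes the first column of `zfs list -t snapshot` output:

```python
from collections import defaultdict

def latest_common(listing: str, src_root: str, tgt_root: str) -> dict[str, str]:
    """For each dataset under src_root, return the newest snapshot name
    that also exists on its copy under tgt_root (matched by name only)."""
    snaps = defaultdict(set)
    for token in listing.split():          # expects snapshot names only, one per token
        if "@" in token:
            ds, snap = token.split("@", 1)
            snaps[ds].add(snap)

    result = {}
    for ds in list(snaps):
        if ds != src_root and not ds.startswith(src_root + "/"):
            continue
        common = snaps[ds] & snaps.get(tgt_root + "/" + ds, set())
        if common:
            # assumes snapshot names embed a sortable timestamp (offsite1-YYYYMMDD...)
            result[ds] = max(common)
    return result

# Illustrative subset of the listing above:
listing = """
Guests/vm-100-disk-0@offsite1-20230323150312
Guests/vm-100-disk-0@offsite1-20230401193646
CloudBackup/Guests/vm-100-disk-0@offsite1-20230323150312
CloudBackup/Guests/vm-100-disk-0@offsite1-20230401193646
"""
print(latest_common(listing, "Guests", "CloudBackup"))
# → {'Guests/vm-100-disk-0': 'offsite1-20230401193646'}
```

A source dataset that is missing from the result has no snapshot in common with the target, which would explain an attempted full send.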
This is the only pool I got this error on; all pools before this one backed up without issue. However, the process aborted on this error, so the pools after this one were not backed up.
Some additional data:
```
~# zfs get quota,logicalused Guests CloudBackup/Guests
NAME                PROPERTY     VALUE  SOURCE
CloudBackup/Guests  quota        none   default
CloudBackup/Guests  logicalused  167G   -
Guests              quota        none   default
Guests              logicalused  208G   -

~# zfs list CloudBackup CloudBackup/Guests
NAME                USED   AVAIL  REFER  MOUNTPOINT
CloudBackup         36.0T  27.4T  279K   none
CloudBackup/Guests  485G   27.4T  279K   none

~# zfs list Guests CloudBackup CloudBackup/Guests
NAME                USED   AVAIL  REFER  MOUNTPOINT
CloudBackup         36.0T  27.4T  279K   none
CloudBackup/Guests  485G   27.4T  279K   none
Guests              209G   15.5G  29K    /Guests

~# zpool list Guests CloudBackup
NAME         SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
CloudBackup  87.3T  49.4T  37.9T  -        -         0%    56%  1.00x  ONLINE  -
Guests       232G   209G   22.7G  -        -         66%   90%  1.00x  ONLINE  -
```