psy0rz / zfs_autobackup

ZFS autobackup is used to periodically back up ZFS filesystems to other locations. Easy to use and very reliable.
https://github.com/psy0rz/zfs_autobackup
GNU General Public License v3.0

cannot receive new filesystem stream: destination exists #191

Closed: githubjsorg closed this issue 11 months ago

githubjsorg commented 1 year ago

I get an odd error when updating an existing backup (see full log below):

! [Target] STDERR > cannot receive new filesystem stream: destination 'CloudBackup/Guests' exists
! [Target] STDERR > must specify -F to overwrite it

Command used: /usr/local/bin/zfs-autobackup --debug --verbose --exclude-unchanged --clear-refreservation offsite1 CloudBackup

--------------LOGS------------------

# [Target] CloudBackup: Checking if filesystem exists
# [Target] CMD    > (zfs list CloudBackup)
# [Source] zpool Guests: Getting zpool properties
# [Source] CMD    > (zpool get -H -p all Guests)
  [Source] Guests: sending to CloudBackup/Guests
# [Target] CloudBackup/Guests: Determining start snapshot
# [Target] CloudBackup/Guests: Checking if filesystem exists
# [Target] CMD    > (zfs list CloudBackup/Guests)
# [Target] CloudBackup/Guests: Getting snapshots
# [Target] CMD    > (zfs list -d 1 -r -t snapshot -H -o name CloudBackup/Guests)
# [Target] CloudBackup/Guests: Creating virtual target snapshots
# [Target] CloudBackup/Guests: Getting zfs properties
# [Target] CMD    > (zfs get -H -o property,value -p all CloudBackup/Guests)
# [Source] Guests@offsite1-20230401193436: Transfer snapshot to CloudBackup/Guests
  [Target] CloudBackup/Guests@offsite1-20230401193436: receiving full
# [Target] CloudBackup/Guests@offsite1-20230401193436: Enabled resume support
# [Target] CMD    > (zfs send --large-block --embed --verbose --parsable --props Guests@offsite1-20230401193436) | (zfs recv -u -x refreservation -v -s CloudBackup/Guests)
# [Source] STDERR > full        Guests@offsite1-20230401193436  14960
# [Source] STDERR > size        14960
! [Target] STDERR > cannot receive new filesystem stream: destination 'CloudBackup/Guests' exists
! [Target] STDERR > must specify -F to overwrite it
! [Source] STDERR > warning: cannot send 'Guests@offsite1-20230401193436': I/O error
! [Source] Command "zfs send --large-block --embed --verbose --parsable --props Guests@offsite1-20230401193436" returned exit code 1 (valid codes: [0])
! [Target] Command "zfs recv -u -x refreservation -v -s CloudBackup/Guests" returned exit code 1 (valid codes: [0])
! [Source] Guests: FAILED: Last command returned error
! Exception: Last command returned error
Traceback (most recent call last):
  File "/usr/local/bin/zfs-autobackup", line 10, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/__init__.py", line 9, in cli
    failed_datasets=zfs_autobackup.run()
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/ZfsAutobackup.py", line 597, in run
    target_node=target_node)
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/ZfsAutobackup.py", line 443, in sync_datasets
    zfs_compressed=self.args.zfs_compressed, force=self.args.force)
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/ZfsDataset.py", line 1087, in sync_snapshots
    recv_pipes=recv_pipes, zfs_compressed=zfs_compressed, force=force)
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/ZfsDataset.py", line 703, in transfer_snapshot
    set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes, force=force)
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/ZfsDataset.py", line 645, in recv_pipe
    self.zfs_node.run(cmd, inp=pipe, valid_exitcodes=valid_exitcodes)
  File "/usr/local/lib/python2.7/dist-packages/zfs_autobackup/ExecuteNode.py", line 156, in run
    raise(ExecuteError("Last command returned error"))
zfs_autobackup.ExecuteNode.ExecuteError: Last command returned error

And yet there are existing backups of this pool.

~# zfs list -t snapshot | grep Guests
CloudBackup/Guests/vm-100-disk-0@offsite1-20230323150312  18.2G  -  175G  -
CloudBackup/Guests/vm-100-disk-0@offsite1-20230401193436  27.6M  -  175G  -
CloudBackup/Guests/vm-100-disk-0@offsite1-20230401193646  0B  -  175G  -
CloudBackup/Guests/vm-106-disk-0@offsite1-20230323150312  0B  -  146G  -
CloudBackup/Guests/vm-200-disk-0@offsite1-20230323150312  0B  -  146G  -
Guests@offsite1-20230401193436  18K  -  28K  -
Guests@offsite1-20230426215621  13K  -  29K  -
Guests/subvol-102-disk-0@offsite1-20230401193436  13.2M  -  3.46G  -
Guests/subvol-102-disk-0@offsite1-20230426215621  10.7M  -  3.46G  -
Guests/subvol-103-disk-0@offsite1-20230401193436  5.39M  -  6.34G  -
Guests/subvol-103-disk-0@offsite1-20230426215621  3.72M  -  6.34G  -
Guests/subvol-105-disk-0@offsite1-20230401193436  803M  -  5.86G  -
Guests/subvol-105-disk-0@Install_VScanners  125M  -  5.87G  -
Guests/subvol-105-disk-0@offsite1-20230426215621  308M  -  6.98G  -
Guests/subvol-108-disk-0@offsite1-20230401193436  2.96M  -  996M  -
Guests/subvol-108-disk-0@offsite1-20230426215621  2.96M  -  996M  -
Guests/subvol-109-disk-0@offsite1-20230426215621  4.00M  -  4.21G  -
Guests/subvol-110-disk-0@offsite1-20230401193436  12.7M  -  2.54G  -
Guests/subvol-110-disk-0@offsite1-20230426215621  6.36M  -  2.55G  -
Guests/subvol-111-disk-0@offsite1-20230426215621  2.19M  -  713M  -
Guests/subvol-500-disk-0@offsite1-20230401193436  17.3M  -  4.10G  -
Guests/subvol-500-disk-0@offsite1-20230426215621  7.88M  -  4.11G  -
Guests/vm-100-disk-0@offsite1-20230323150312  6.35G  -  60.6G  -
Guests/vm-100-disk-0@offsite1-20230401193436  11.0M  -  60.6G  -
Guests/vm-100-disk-0@offsite1-20230401193646  11.0M  -  60.6G  -
Guests/vm-100-disk-0@offsite1-20230426215621  1.91G  -  60.6G  -
Guests/vm-106-disk-0@offsite1-20230323150312  1K  -  50.4G  -
Guests/vm-106-disk-0@offsite1-20230401193436  1K  -  50.4G  -
Guests/vm-106-disk-0@offsite1-20230426215621  0B  -  50.4G  -
Guests/vm-200-disk-0@offsite1-20230323150312  1K  -  50.5G  -
Guests/vm-200-disk-0@offsite1-20230401193436  1K  -  50.5G  -
Guests/vm-200-disk-0@offsite1-20230426215621  0B  -  50.5G  -

This is the only pool I got this error on. All the pools before this one backed up without issue during this run. However, the process aborted on this error, so the pools after this one were not backed up.

Some additional data:

~# zfs get quota,logicalused Guests CloudBackup/Guests
NAME                PROPERTY     VALUE  SOURCE
CloudBackup/Guests  quota        none   default
CloudBackup/Guests  logicalused  167G   -
Guests              quota        none   default
Guests              logicalused  208G   -

~# zfs list CloudBackup CloudBackup/Guests
NAME                USED   AVAIL  REFER  MOUNTPOINT
CloudBackup         36.0T  27.4T  279K   none
CloudBackup/Guests  485G   27.4T  279K   none

~# zfs list Guests CloudBackup CloudBackup/Guests
NAME                USED   AVAIL  REFER  MOUNTPOINT
CloudBackup         36.0T  27.4T  279K   none
CloudBackup/Guests  485G   27.4T  279K   none
Guests              209G   15.5G  29K    /Guests

~# zpool list Guests CloudBackup
NAME         SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
CloudBackup  87.3T  49.4T  37.9T  -        -         0%    56%  1.00x  ONLINE  -
Guests       232G   209G   22.7G  -        -         66%   90%  1.00x  ONLINE  -

psy0rz commented 1 year ago

Please provide the complete output while using --debug --verbose --debug-output.

githubjsorg commented 1 year ago

Here you go. Complete Output.txt

psy0rz commented 1 year ago

Sorry, I have been a bit busy.

psy0rz commented 1 year ago

can you try with 3.2-beta1?

psy0rz commented 1 year ago

It seems CloudBackup/Guests somehow was already created at the target, but it doesn't have any snapshots. So zfs-autobackup tries to do a full backup over it, but it already has data (probably the mountpoint directories that were created).

It might help to run zfs-autobackup once with the -F option (force), so it removes that data and creates the proper snapshots/data. It shouldn't delete any child datasets on the target.
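
For example, something along these lines might work (just a sketch based on the command and dataset names from the log above; the --test run is only an optional dry run):

# confirm the target parent dataset really has no snapshots of its own
zfs list -d 1 -r -t snapshot -H -o name CloudBackup/Guests

# optional dry run first, then one forced run so the existing (empty) target dataset can be overwritten
zfs-autobackup --test --force --verbose --exclude-unchanged --clear-refreservation offsite1 CloudBackup
zfs-autobackup --force --verbose --exclude-unchanged --clear-refreservation offsite1 CloudBackup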

florian-obradovic commented 1 year ago

Hi, I have the same (or similar?) issue on my Proxmox machine; screenshot attached: CleanShot 2023-05-23 at 11 34 02@2x

sudo zfs-autobackup backup2external backups14tb/zfs-autobackup --keep-source=0 --keep-target=1,1d1w,1m1y --decrypt --encrypt --clear-mountpoint --exclude-received -F --debug --verbose --debug-output

Attached log: output-zfs-autobackup-v3.1.3.txt

florian-obradovic commented 1 year ago

Same with 3.1-beta1:

#### Synchronising
  [Source] tank/encrptd: sending to backups14tb/zfs-autobackup/tank/encrptd
  [Target] backups14tb/zfs-autobackup/tank/encrptd@backup2external-20230522155723: receiving full
! [Target] STDERR > cannot receive new filesystem stream: destination 'backups14tb/zfs-autobackup/tank/encrptd' exists
! [Target] STDERR > must specify -F to overwrite it
! [Source] Command "zfs send --large-block --verbose --parsable tank/encrptd@backup2external-20230522155723" returned exit code 141 (valid codes: [0])
! [Target] Command "zfs recv -u -x keylocation -x pbkdf2iters -x keyformat -x encryption -o canmount=noauto -v -s backups14tb/zfs-autobackup/tank/encrptd" returned exit code 1 (valid codes: [0])
! [Source] tank/encrptd: FAILED: Last command returned error

florian-obradovic commented 1 year ago

I started from scratch and it works now. I used true instead of child:

sudo zfs set autobackup:backup2external=true tank
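
(For context, the property values behave differently: true selects the dataset itself plus all of its children, child selects only the children, and false excludes a dataset and its children again. A small sketch, where tank/some-child is only a placeholder name:)

# select tank itself and everything below it
zfs set autobackup:backup2external=true tank

# select only the children of tank, not tank itself
zfs set autobackup:backup2external=child tank

# exclude one dataset (and its children) again; tank/some-child is a placeholder
zfs set autobackup:backup2external=false tank/some-child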

githubjsorg commented 1 year ago

> It seems CloudBackup/Guests somehow was already created at the target, but it doesn't have any snapshots. So zfs-autobackup tries to do a full backup over it, but it already has data (probably the mountpoint directories that were created).
>
> It might help to run zfs-autobackup once with the -F option (force), so it removes that data and creates the proper snapshots/data. It shouldn't delete any child datasets on the target.

CloudBackup/Guests was initially created by zfs-autobackup, so I don't know why there wouldn't be any snapshots or why it couldn't update it.

I will try -F when I get a chance, but that doesn't explain the initial issue.
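
(One way to double-check that, sketched from the dataset names in the output above: list the snapshots that exist directly on the parent datasets on both sides, and compare the guid of a snapshot name that exists on both. In the earlier zfs list output the children of CloudBackup/Guests do have snapshots, but CloudBackup/Guests itself apparently has none, which would fit the explanation above.)

# snapshots directly on the source parent dataset
zfs list -d 1 -r -t snapshot -H -o name Guests

# snapshots directly on the target parent dataset; if this prints nothing, there is
# no common snapshot, so only a full (or forced, -F) send can succeed
zfs list -d 1 -r -t snapshot -H -o name CloudBackup/Guests

# compare guids of a snapshot name present on both sides (child dataset as an example)
zfs get -H -o name,value guid Guests/vm-100-disk-0@offsite1-20230401193436 CloudBackup/Guests/vm-100-disk-0@offsite1-20230401193436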

psy0rz commented 11 months ago

feel free to open a new issue if it happens again