psy0rz / zfs_autobackup

ZFS autobackup is used to periodically back up ZFS filesystems to other locations. Easy to use and very reliable.
https://github.com/psy0rz/zfs_autobackup
GNU General Public License v3.0

FAILED: 'datetime.datetime' object has no attribute 'timestamp' #248

Closed: githubjsorg closed this issue 6 months ago

githubjsorg commented 7 months ago
# /usr/local/bin/zfs-autobackup --debug --verbose --clear-mountpoint --keep-source=2 --exclude-unchanged=50 --clear-refreservation offsite1 CloudBackup
  zfs-autobackup v3.3-beta.2 - (c)2022 E.H.Eefting (edwin@datux.nl)

  NOTE: Source and target are on the same host, excluding target-path from selection.

  Current time               : 2024-03-15 13:18:53
  Selecting dataset property : autobackup:offsite1
  Snapshot format            : offsite1-%Y%m%d%H%M%S
  Timezone                   : Local
  Hold name                  : zfs_autobackup:offsite1

  #### Source settings
  [Source] Keep the last 2 snapshots.

  #### Selecting
# [Source] Getting selected datasets
# [Source] CMD    > (zfs get -t volume,filesystem -o name,value,source -H autobackup:offsite1)
# [Source] CloudBackup/rpool: Excluded (path in exclude list)
# [Source] CloudBackup/rpool/ROOT: Excluded (path in exclude list)
# [Source] CloudBackup/rpool/ROOT/pve-1: Excluded (path in exclude list)
# [Source] CloudBackup/rpool/data: Excluded (path in exclude list)
# [Source] CloudBackup/rpool/data/subvol-101-disk-0: Excluded (path in exclude list)
# [Source] CloudBackup/rpool/data/vm-107-disk-0: Excluded (path in exclude list)
# [Source] Guests_NVME/subvol-109-disk-0: Checking if dataset is changed
# [Source] Guests_NVME/subvol-109-disk-0: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Guests_NVME/subvol-109-disk-0)
  [Source] Guests_NVME/subvol-109-disk-0: Selected
# [Source] Guests_NVME/subvol-111-disk-0: Checking if dataset is changed
# [Source] Guests_NVME/subvol-111-disk-0: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Guests_NVME/subvol-111-disk-0)
  [Source] Guests_NVME/subvol-111-disk-0: Selected
# [Source] Guests_NVME/vm-300-disk-0: Checking if dataset is changed
# [Source] Guests_NVME/vm-300-disk-0: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Guests_NVME/vm-300-disk-0)
  [Source] Guests_NVME/vm-300-disk-0: Excluded (by --exclude-unchanged)
# [Source] Guests_NVME/vm-300-disk-1: Checking if dataset is changed
# [Source] Guests_NVME/vm-300-disk-1: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Guests_NVME/vm-300-disk-1)
  [Source] Guests_NVME/vm-300-disk-1: Excluded (by --exclude-unchanged)
# [Source] Guests_NVME/vm-301-disk-0: Checking if dataset is changed
# [Source] Guests_NVME/vm-301-disk-0: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Guests_NVME/vm-301-disk-0)
  [Source] Guests_NVME/vm-301-disk-0: Excluded (by --exclude-unchanged)
# [Source] Guests_NVME/vm-301-disk-1: Checking if dataset is changed
# [Source] Guests_NVME/vm-301-disk-1: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Guests_NVME/vm-301-disk-1)
  [Source] Guests_NVME/vm-301-disk-1: Excluded (by --exclude-unchanged)
  [Source] StorePool: Excluded
# [Source] StorePool/Storage: Checking if dataset is changed
# [Source] StorePool/Storage: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all StorePool/Storage)
  [Source] StorePool/Storage: Excluded (by --exclude-unchanged)
  [Source] StorePool/vm-200-disk-0: Excluded

  #### Snapshotting
# [Source] Guests_NVME/subvol-109-disk-0: Dataset should exist
# [Source] Guests_NVME/subvol-109-disk-0: Getting snapshots
# [Source] CMD    > (zfs list -d 1 -r -t snapshot -H -o name Guests_NVME/subvol-109-disk-0)
# [Source] Guests_NVME/subvol-109-disk-0: Getting bytes written since our last snapshot
# [Source] CMD    > (zfs get -H -ovalue -p written@Guests_NVME/subvol-109-disk-0@offsite1-20240315131548 Guests_NVME/subvol-109-disk-0)
# [Source] Guests_NVME/subvol-111-disk-0: Dataset should exist
# [Source] Guests_NVME/subvol-111-disk-0: Getting snapshots
# [Source] CMD    > (zfs list -d 1 -r -t snapshot -H -o name Guests_NVME/subvol-111-disk-0)
# [Source] Guests_NVME/subvol-111-disk-0: Getting bytes written since our last snapshot
# [Source] CMD    > (zfs get -H -ovalue -p written@Guests_NVME/subvol-111-disk-0@offsite1-20240315131548 Guests_NVME/subvol-111-disk-0)
  [Source] Creating snapshots offsite1-20240315131853 in pool Guests_NVME
# [Source] CMD    > (zfs snapshot Guests_NVME/subvol-109-disk-0@offsite1-20240315131853 Guests_NVME/subvol-111-disk-0@offsite1-20240315131853)

  #### Target settings
  [Target] Keep the last 10 snapshots.
  [Target] Keep every 1 day, delete after 1 week.
  [Target] Keep every 1 week, delete after 1 month.
  [Target] Keep every 1 month, delete after 1 year.
  [Target] Receive datasets under: CloudBackup

  #### Synchronising
# [Target] CloudBackup: Checking if dataset exists
# [Target] CMD    > (zfs list CloudBackup)
# Checking target names:
# [Source] Guests_NVME/subvol-109-disk-0: -> CloudBackup/Guests_NVME/subvol-109-disk-0
# [Source] Guests_NVME/subvol-111-disk-0: -> CloudBackup/Guests_NVME/subvol-111-disk-0
# [Target] CloudBackup/Guests_NVME: Checking if dataset exists
# [Target] CMD    > (zfs list CloudBackup/Guests_NVME)
# [Source] zpool Guests_NVME: Getting zpool properties
# [Source] CMD    > (zpool get -H -p all Guests_NVME)
# [Target] zpool CloudBackup: Getting zpool properties
# [Target] CMD    > (zpool get -H -p all CloudBackup)
# [Target] CloudBackup/Guests_NVME/subvol-109-disk-0: Determining start snapshot
# [Target] CloudBackup/Guests_NVME/subvol-109-disk-0: Checking if dataset exists
# [Target] CMD    > (zfs list CloudBackup/Guests_NVME/subvol-109-disk-0)
# [Target] STDERR > cannot open 'CloudBackup/Guests_NVME/subvol-109-disk-0': dataset does not exist
! [Source] Guests_NVME/subvol-109-disk-0: FAILED: 'datetime.datetime' object has no attribute 'timestamp'
  Debug mode, aborting on first error
! Exception: 'datetime.datetime' object has no attribute 'timestamp'
Traceback (most recent call last):
  File "/usr/local/bin/zfs-autobackup", line 11, in <module>
    load_entry_point('zfs-autobackup==3.3b2', 'console_scripts', 'zfs-autobackup')()
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsAutobackup.py", line 574, in cli
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsAutobackup.py", line 532, in run
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsAutobackup.py", line 400, in sync_datasets
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsDataset.py", line 1139, in sync_snapshots
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsDataset.py", line 1061, in _plan_sync
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsDataset.py", line 854, in thin_list
  File "build/bdist.linux-x86_64/egg/zfs_autobackup/ZfsNode.py", line 64, in thin
AttributeError: 'datetime.datetime' object has no attribute 'timestamp'

I have tried destroying the target dataset and, as you can see above, that made no difference. I also searched for the error; it appears to come from Python rather than from zfs/zpool.

I was using the beta version as recommended to fix another bug.

Please let me know if there is anything else I can provide to help debug this.
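
(For reference: datetime.datetime.timestamp() was only added in Python 3.3, so this AttributeError is usually a sign that the script is running under an older interpreter than expected. The shebang of the entry script, with the path taken from the log above, shows which interpreter actually runs it, and the second command confirms whether the system python3 itself supports timestamp():)

# head -1 /usr/local/bin/zfs-autobackup
# python3 -c 'import datetime; print(datetime.datetime.now().timestamp())'

If the second command prints a float, the system python3 is fine and the problem lies with the interpreter the script was installed against.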

githubjsorg commented 7 months ago

Rolling back to v3.2 seems to have fixed this issue, but I will see whether the old issue of not being able to append new backups to old ones crops up again.

psy0rz commented 7 months ago

which python version are you using?

githubjsorg commented 7 months ago

Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux

After upgrading to PVE 8, zfs_autobackup was somehow back at v3.3-beta.2.

But I can't test the release version because pip on PVE 8 now gives this error:

error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
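
(For reference, the venv route that message describes would look roughly like the following; the install path is purely illustrative:)

# python3 -m venv /opt/zfs-autobackup
# /opt/zfs-autobackup/bin/pip install zfs-autobackup
# /opt/zfs-autobackup/bin/zfs-autobackup --version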

I tried following the above instructions:

pipx install zfs_autobackup

but after the install:

# zfs-autobackup --version
zfs-autobackup v3.3-beta.2 - (c)2022 E.H.Eefting (edwin@datux.nl)

it still shows the beta version I had previously installed to fix an earlier bug, and that version is hitting the same error reported in this ticket:

! [Source] Guests_NVME/subvol-109-disk-0: FAILED: 'datetime.datetime' object has no attribute 'timestamp'
  Debug mode, aborting on first error
! Exception: 'datetime.datetime' object has no attribute 'timestamp'

I reinstalled v3.3-beta.2 from source and now, somehow, it is no longer throwing that error. But I am now getting these errors:

! [Target] STDERR > cannot receive refquota property on CloudBackup/Guests_NVME/subvol-111-disk-0: size is less than current used or reserved space
NAME                                       PROPERTY  VALUE     SOURCE
CloudBackup/Guests_NVME/subvol-111-disk-0  refquota  none      default
Guests_NVME/subvol-111-disk-0              refquota  100G      local

I tried setting the refquota on CloudBackup, but it threw the same error as zfs-autobackup:

# zfs set refquota=100G CloudBackup/Guests_NVME/subvol-111-disk-0
cannot set property for 'CloudBackup/Guests_NVME/subvol-111-disk-0': size is less than current used or reserved space

If you can think of anything else I can try, I am open to almost anything (within reason) to get this working consistently.
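
(Worth noting: ZFS refuses to set refquota below a dataset's currently referenced space, which is what both errors above are saying. Comparing referenced against refquota on both sides should show the conflict; a sketch using the datasets from the log:)

# zfs get -H -o name,property,value referenced,refquota Guests_NVME/subvol-111-disk-0 CloudBackup/Guests_NVME/subvol-111-disk-0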

githubjsorg commented 7 months ago

Oh, and in case this is helpful:

# zfs version
zfs-2.2.3-pve1
zfs-kmod-2.2.2-pve1

psy0rz commented 6 months ago

the refquota is not related to zfs-autobackup, so you're getting close to solving it.

i'm also releasing 3.3-beta2 to pip, so it should be easier to install it.

psy0rz commented 6 months ago

try setting refquota to none.

you can also use zfs-autobackup's --filter-properties refquota option to prevent this issue for new datasets in the future.
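
for example, clearing the property on the target and then adding the filter to your original invocation (a sketch, reusing the dataset and flags from your log):

# zfs set refquota=none CloudBackup/Guests_NVME/subvol-111-disk-0
# zfs-autobackup --filter-properties refquota --clear-mountpoint --keep-source=2 --exclude-unchanged=50 --clear-refreservation offsite1 CloudBackup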

please reopen if you still have issues.

githubjsorg commented 6 months ago

i'm also releasing 3.3-beta2 to pip, so it should be easier to install it.

I don't know if it just hasn't made it into pip yet, but I got this:

~# pipx upgrade zfs-autobackup
zfs-autobackup is already at latest version 3.2 (location: /root/.local/pipx/venvs/zfs-autobackup)

githubjsorg commented 6 months ago

I also tried uninstall/reinstall.

~# pipx uninstall zfs-autobackup
uninstalled zfs-autobackup! ✨ 🌟 ✨
~# pipx install zfs-autobackup
⚠️  Note: zfs-autoverify was already on your PATH at /usr/local/bin/zfs-autoverify
⚠️  Note: zfs-check was already on your PATH at /usr/local/bin/zfs-check
  installed package zfs-autobackup 3.2, installed using Python 3.11.2
  These apps are now globally available
    - zfs-autobackup
    - zfs-autoverify
    - zfs-check
done! ✨ 🌟 ✨
psy0rz commented 6 months ago

I've updated the manual to install with pipx: https://github.com/psy0rz/zfs_autobackup/wiki#using-pipx

So you need to add --pip-args=--pre to get the beta version.
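
a sketch of the full sequence, reusing the uninstall/install commands from above:

# pipx uninstall zfs-autobackup
# pipx install --pip-args=--pre zfs-autobackup
# zfs-autobackup --version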