yboetz / pyznap

ZFS snapshot tool written in python
GNU General Public License v3.0

Issue sending snapshot #95

Open · killmasta93 opened 1 year ago

killmasta93 commented 1 year ago

Hi, I was wondering if someone could shed some light on this. I'm currently sending snapshots to another pool, but the disk does not show up there: pyznap reports that it is sending, yet vm-100-disk-0 never appears in the destination pool.


vmbaks                    1.49G   898G       24K  /vmbaks
vmbaks/data               1.49G   898G       24K  /vmbaks/data
root@prometheus:/# pyznap send
Jan 26 23:32:02 INFO: Starting pyznap...
Jan 26 23:32:02 INFO: Sending snapshots...
Jan 26 23:32:02 INFO: No common snapshots on vmbaks/data, sending oldest snapshot rpool/data/vm-100-disk-0@pyznap_2023-01-24_11:00:02_frequent (~8.8G)...
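
For reference, a quick way to see where a stream actually landed is to list everything under the backup pool, snapshots included (a hedged diagnostic sketch; the output will depend on what has already been received):

# recursively list all filesystems, volumes and snapshots below vmbaks
zfs list -r -t all vmbaks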

The config:


[rpool/data/vm-100-disk-0]
frequent = 14
snap = yes
clean = yes
dest = vmbaks/data

Thank you

yboetz commented 1 year ago

How exactly does it not show up when you use zfs list?

killmasta93 commented 1 year ago

Thank you so much for the reply. This is what I get:

root@prometheus:~# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                      322G   577G      104K  /rpool
rpool/ROOT                7.15G   577G       96K  /rpool/ROOT
rpool/ROOT/pve-1          7.15G   577G     7.15G  /
rpool/data                 315G   577G       96K  /rpool/data
rpool/data/vm-100-disk-0  5.86G   577G     5.74G  -
rpool/data/vm-101-disk-1  94.5G   577G     85.3G  -
rpool/data/vm-102-disk-2   152G   577G      149G  -
rpool/data/vm-103-disk-0  7.48G   577G     4.59G  -
rpool/data/vm-103-disk-1  20.3M   577G     20.3M  -
rpool/data/vm-104-disk-0  9.77G   577G     8.10G  -
rpool/data/vm-105-disk-0  8.33G   577G     4.42G  -
rpool/data/vm-105-disk-1  33.3G   577G     32.6G  -
rpool/data/vm-106-disk-0  4.05G   577G     2.55G  -
vmbaks                    3.97G   895G       24K  /vmbaks
vmbaks/data               3.97G   895G     3.92G  -

In theory it should show

vmbaks/data/vm-100-disk-0

Thank you

yboetz commented 1 year ago

With that config you're sending vm-100-disk-0 directly onto vmbaks/data itself, not into a sub-dataset beneath it. You have to specify the full destination path in the config. Destroy the existing vmbaks/data dataset, recreate it, and then send to it again with the corrected path.
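
Roughly, that could look like the following (a sketch only, assuming nothing else is kept under vmbaks/data and that dest in the config has already been changed to the full vmbaks/data/vm-100-disk-0 path):

# destroy the dataset that received vm-100-disk-0 directly (this deletes its contents)
zfs destroy -r vmbaks/data
# recreate it as an empty parent filesystem for future child datasets
zfs create vmbaks/data
# run the send again against the corrected destination path
pyznap send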

killmasta93 commented 1 year ago

Thank you so much for the reply. This is the outcome:

root@prometheus:/#   pyznap send --dest-auto-create
Feb 02 09:13:28 INFO: Starting pyznap...
Feb 02 09:13:28 INFO: Sending snapshots...
Feb 02 09:13:28 ERROR: Destination vmbaks/data/vm-100-disk-0 does not exist, manually create it or use "dest-auto-create" option...
Feb 02 09:13:28 INFO: Finished successfully...
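
One thing worth checking at this point (a hedged suggestion): whether vmbaks/data really was recreated as a plain filesystem, since a dataset that received the zvol directly is itself a volume and cannot hold child datasets.

# show the type of every dataset under vmbaks; vmbaks/data should read
# "filesystem", not "volume", before children can be created beneath it
zfs list -r -o name,type vmbaks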

And the config:

[rpool/data/vm-100-disk-0]
frequent = 14
snap = yes
clean = yes
dest = vmbaks/data/vm-100-disk-0

Thank you