yboetz / pyznap

ZFS snapshot tool written in python
GNU General Public License v3.0
197 stars 35 forks

Strange send behavior - perhaps documentation #42

Closed shotelco closed 5 years ago

shotelco commented 5 years ago

Set up pyznap on a SmartOS (Solaris-like) host. Looks like everything is correct except for the send functionality. I want to backup a local SAMBA/CIFS share mounted as a ZFS volume to a remote SAMBA/CIFS share. Doing a full backup, then incremental backups managed by pyznap. I configured a simple pyznap.conf. When forcing a snap, there seems to be no errors, when forcing a send, there are numerous errors. I likely do not understand how actually doing a volume backup works with pyznap. Any help would be appreciated:

backup test1

[zones/0f..trunkated..ec/data/petashare1/admin]
  hourly = 1
  snap = yes
  send = yes
  clean = yes
  dest = ssh:22:root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob
  dest_keys = .ssh/id_rsa
  compress = gzip

snap output:

[root@SMB1 /opt/local/bin]# ./pyznap snap
Sep 04 20:58:41 INFO: Starting pyznap...
Sep 04 20:58:41 INFO: Taking snapshots...
Sep 04 20:58:41 INFO: Taking snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_20:58:41_hourly...
Sep 04 20:58:41 INFO: Cleaning snapshots...
Sep 04 20:58:41 INFO: Deleting snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:41 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:42 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:42 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-04_19:59:46_hourly...
Sep 04 20:58:42 INFO: Finished successfully...

No errors, even though I can't find the snapshots.

send output:

[root@SMB1 /opt/local/bin]# ./pyznap send
Sep 04 21:05:54 INFO: Starting pyznap...
Sep 04 21:05:54 INFO: Sending snapshots...
Sep 04 21:05:57 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob, sending oldest snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_20:58:41_hourly (~12.6K)...
sh[1]: lzop: not found [No such file or directory]
Sep 04 21:05:59 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:00 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-04_20:58:41_hourly (~13.6K)...
sh[1]: lzop: not found [No such file or directory]
Sep 04 21:06:00 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:01 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-04_20:58:41_hourly (~8.5M)...
sh[1]: lzop: not found [No such file or directory]
sh: line 1: mbuffer: not found
sh: line 1: pv: not found
Sep 04 21:06:04 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:04 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-04_20:58:41_hourly (~12.6K)...
sh[1]: lzop: not found [No such file or directory]
Sep 04 21:06:05 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 04 21:06:05 INFO: Finished successfully...

Although I was not using the cron in crontabs, for your reference, here it is:

cron/crontab/pyznap:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
!# */15 * * * * root /opt/local/bin/pyznap snap >> /var/log/pyznap.log 2>&1
!# */20 * * * * root /opt/local/bin/pyznap send >> /var/log/pyznap.log 2>&1

Could this be a pyznap.conf error? Thanks in advance.

yboetz commented 5 years ago

About your config: there is no send = yes keyword. If you specify one (or multiple) dest, pyznap will try to send to them. You should also always specify full paths, e.g. dest_keys = /home/user/.ssh/id_rsa. This should not cause the errors you see, though, as the unknown send keyword is simply ignored, and the ssh connection seems to have worked anyway. The rest of the config looks OK. After running pyznap snap you should see the snapshots using

zfs list -r zones -t snap

Now for your send problem: it seems that pyznap does not find the necessary lzop, mbuffer and pv executables. It should check whether these are available, though, so it is strange that it tries to use them in the first place. Are lzop, mbuffer and pv installed on both your source and dest? Can you post the output of

which lzop
which mbuffer
which pv
lzop --version

I have never used SmartOS (or Solaris) and haven't tested pyznap on it, but it looks like some paths differ from Linux and that confuses pyznap. You might also have to change the PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin line in the crontab. Please also post the output of

echo $PATH
shotelco commented 5 years ago

Output of pyznap snap:

[root@SMB1 /opt/local/bin]# ./pyznap snap
Sep 05 07:20:36 INFO: Starting pyznap...
Sep 05 07:20:36 INFO: Taking snapshots...
Sep 05 07:20:36 INFO: Taking snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-05_07:20:36_hourly...
Sep 05 07:20:36 INFO: Cleaning snapshots...
Sep 05 07:20:36 INFO: Deleting snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-04_20:58:41_hourly...
Sep 05 07:20:37 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-04_20:58:41_hourly...
Sep 05 07:20:37 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-04_20:58:41_hourly...
Sep 05 07:20:37 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-04_20:58:41_hourly...
Sep 05 07:20:38 INFO: Finished successfully...

zfs list -r zones -t snap output on the remote/dest SMB host:

~]# zfs list -r zones -t snap
cannot open '-t': dataset does not exist
cannot open 'snap': dataset does not exist
NAME                                          USED   AVAIL  REFER  MOUNTPOINT
zones                                         159G   2.48T  796K   /zones
zones/90..trunkated..976                      1.47G  1.46T  1.79G  /zones/90..trunkated..76
zones/90..trunkated..76/data                  246M   1.46T  88K    /zones/90..trunkated..76/data
zones/90..trunkated..76/data/home             245M   1.46T  88K    /home
zones/90..trunkated..76/data/home/admin       116K   1.46T  116K   /home/admin
zones/90..trunkated..76/data/home/backupblob  245M   1.46T  245M   /home/backupblob
[root@smb01DC ~]#

zfs list -r zones -t snap output on the local/source SMB host:

~]# zfs list -r zones -t snap
cannot open '-t': dataset does not exist
cannot open 'snap': dataset does not exist
NAME                                                USED   AVAIL  REFER  MOUNTPOINT
zones                                               171G   8.61T  603K   /zones
zones/0f..trunkated..ec                             440M   99.6G  496M   /zones/0f..trunkated..ec
zones/0f..trunkated..ec/data                        8.51M  99.6G  38.1K  /zones/0f..trunkated..ec/data
zones/0f..trunkated..ec/data/petashare1             8.47M  1.94G  41.7K  /petashare1
zones/0f..trunkated..ec/data/petashare1/admin       8.37M  1.94G  8.35M  /petashare1/admin
zones/0f..trunkated..ec/data/petashare1/backupblob  38.1K  1.94G  38.1K  /petashare1/backupblob
[root@SMB1 ~]#

Other requested outputs:

~]# which lzop
no lzop in /usr/local/sbin /usr/local/bin /opt/local/sbin /opt/local/bin /usr/sbin /usr/bin /sbin
[root@SMB1 ~]# which mbuffer
no mbuffer in /usr/local/sbin /usr/local/bin /opt/local/sbin /opt/local/bin /usr/sbin /usr/bin /sbin
[root@SMB1 ~]# which pv
no pv in /usr/local/sbin /usr/local/bin /opt/local/sbin /opt/local/bin /usr/sbin /usr/bin /sbin
[root@SMB1 ~]# lzop --version
-bash: lzop: command not found
[root@SMB1 ~]# echo $PATH
/usr/local/sbin:/usr/local/bin:/opt/local/sbin:/opt/local/bin:/usr/sbin:/usr/bin:/sbin
[root@SMB1 ~]#

lzop, mbuffer, & pv are not installed on either source or dest. As a note, the "..trunkated.." is something I put in to replace very long strings that look like this, for ease of viewing: 0f986a00-9397-4679-e6de-b8594fe05cec

I can probably install lzop, mbuffer, & pv if needed. Thanks for the quick response! Thoughts?

yboetz commented 5 years ago

This here is very strange:

# zfs list -r zones -t snap
cannot open '-t': dataset does not exist
cannot open 'snap': dataset does not exist

The option -t snap should show snapshots. What happens when you use zfs list -t snap -r zones?

The error you see in pyznap send seems to come from different behavior of which on Linux and SmartOS. pyznap uses it to check whether certain commands are available, e.g. lzop, mbuffer and pv. Installing those will certainly make it work, but this is a bug in pyznap that should probably be fixed. Can you run these commands and send me the output?

which zfs ; echo $?
which lzop ; echo $?
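For reference, a portable way to do this kind of check from Python is `shutil.which`, which searches PATH in-process instead of shelling out to the platform's `which` binary (a sketch; these helper names are hypothetical, not pyznap's code):

```python
import shutil
import subprocess

def exists_via_which(cmd):
    """Fragile check: shells out to the external `which` binary, whose
    exit status and messages can differ between Linux and SmartOS."""
    try:
        return subprocess.run(
            ["which", cmd],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0
    except FileNotFoundError:  # `which` itself is missing
        return False

def exists_via_shutil(cmd):
    """Portable check: searches PATH in-process, no external tool needed."""
    return shutil.which(cmd) is not None
```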
shotelco commented 5 years ago

Regarding the # zfs list -r zones -t snap command: the system does not accept the combined command, but run individually the commands do respond on local/source:

~]# zfs list -t snap
NAME                                                                                  USED   AVAIL  REFER  MOUNTPOINT
zones/0f..trunkated..ec/data@pyznap_2019-09-05_07:20:36_hourly                        0      -      38.1K  -
zones/0f..trunkated..cec/data/petashare1@pyznap_2019-09-05_07:20:36_hourly            27.2K  -      41.7K  -
zones/0f..trunkated..05cec/data/petashare1/admin@pyznap_2019-09-05_07:20:36_hourly    0      -      8.35M  -
zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-05_07:20:36_hourly  0      -      38.1K  -

[root@SMB1 ~]# zfs list -r zones
NAME                                                USED   AVAIL  REFER  MOUNTPOINT
zones                                               171G   8.61T  601K   /zones
zones/0f..trunkated..ec                             440M   99.6G  496M   /zones/0f..trunkated..ec
zones/0f..trunkated..ec/data                        8.49M  99.6G  38.1K  /zones/0f..trunkated..ec/data
zones/0f..trunkated..ec/data/petashare1             8.45M  1.94G  41.7K  /petashare1
zones/0f..trunkated..ec/data/petashare1/admin       8.35M  1.94G  8.35M  /petashare1/admin
zones/0f..trunkated..ec/data/petashare1/backupblob  38.1K  1.94G  38.1K  /petashare1/backupblob
[root@SMB1 ~]# zfs list -r zones -t snap
cannot open '-t': dataset does not exist
cannot open 'snap': dataset does not exist
NAME                                                USED   AVAIL  REFER  MOUNTPOINT
zones                                               171G   8.61T  601K   /zones
zones/0f..trunkated..ec                             440M   99.6G  496M   /zones/0f..trunkated..ec
zones/0f..trunkated..ec/data                        8.49M  99.6G  38.1K  /zones/0f..trunkated..ec/data
zones/0f..trunkated..ec/data/petashare1             8.45M  1.94G  41.7K  /petashare1
zones/0f..trunkated..ec/data/petashare1/admin       8.35M  1.94G  8.35M  /petashare1/admin
zones/0f..trunkated..ec/data/petashare1/backupblob  38.1K  1.94G  38.1K  /petashare1/backupblob

Response on remote/dest (I manually made 2 test snapshots):

~]# zfs list -t snap
NAME                                                  USED  AVAIL  REFER  MOUNTPOINT
zones/90..trunkated..76/data/home/backupblob@backup   0     -      245M   -
zones/90..trunkated..76/data/home/backupblob@backup1  0     -      245M   -

Requested outputs:

~]# which zfs ; echo $?
/usr/sbin/zfs
0
~]# which lzop ; echo $?
no lzop in /usr/local/sbin /usr/local/bin /opt/local/sbin /opt/local/bin /usr/sbin /usr/bin /sbin
1

After the above actions, I then installed lzop, mbuffer, & pv (only on local/source):

~]# which lzop
/opt/local/bin/lzop
~]# lzop --version
lzop 1.04
LZO library 2.10
~]# which mbuffer
/opt/local/bin/mbuffer
~]# mbuffer --version
mbuffer version 20180625
~]# which pv
/opt/local/bin/pv
~]# pv --version
pv 1.6.6

After installing these, testing the following:

]# ./pyznap snap
Sep 05 17:50:10 INFO: Starting pyznap...
Sep 05 17:50:10 INFO: Taking snapshots...
Sep 05 17:50:10 INFO: Taking snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-05_17:50:10_hourly...
Sep 05 17:50:11 INFO: Cleaning snapshots...
Sep 05 17:50:11 INFO: Deleting snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-05_07:20:36_hourly...
Sep 05 17:50:11 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-05_07:20:36_hourly...
Sep 05 17:50:11 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-05_07:20:36_hourly...
Sep 05 17:50:11 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/subrigo@pyznap_2019-09-05_07:20:36_hourly...
Sep 05 17:50:12 INFO: Finished successfully...

]# ./pyznap send
Sep 05 17:51:09 INFO: Starting pyznap...
Sep 05 17:51:09 INFO: Sending snapshots...
Sep 05 17:51:19 ERROR: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob, but snapshots exist. Not sending...
Sep 05 17:51:19 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-05_17:50:10_hourly (~13.6K)...
Sep 05 17:51:20 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 05 17:51:20 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-05_17:50:10_hourly (~8.5M)...
2.52MiB 0:00:00 [3.75MiB/s] [==============> ] 29%
mbuffer: error: outputThread: error writing to at offset 0x2a0000: Broken pipe
mbuffer: warning: error during output to : Broken pipe
Sep 05 17:51:20 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin: bash: lzop: command not found - cannot receive: failed to read from stream - mbuffer: error: outputThread: error writing to at offset 0x0: Broken pipe - mbuffer: warning: error during output to : Broken pipe...
Sep 05 17:51:21 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/subrigo, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-05_17:50:10_hourly (~12.6K)...
Sep 05 17:51:21 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob: bash: lzop: command not found - cannot receive: failed to read from stream...
Sep 05 17:51:21 INFO: Finished successfully...

Note: In the above pyznap snap and send operations, the remote/dest host does not have lzop, mbuffer, or pv installed. I will install them on the remote side (it takes a bit of time) and try the snap & send operations again, then update with results. Are lzop, mbuffer & pv requirements on both nodes? Is pyznap required on the remote/dest node? Please review what we have so far. Thanks!

yboetz commented 5 years ago

mbuffer, lzop and pv are not requirements, but they are useful. pyznap tests whether they are available and uses them if so. But on SmartOS these tests seem to give the wrong result, and pyznap thinks they are there even though they are not installed. I will try to fix this over the weekend and release a new version; I would be glad if you could test it then. For now you can just install mbuffer, lzop and pv, and pyznap send should work. In the config you could also set compress = none, in which case pyznap does not use lzop.
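That conditional pipeline assembly can be pictured roughly like this (a sketch, not pyznap's actual implementation; the function name and the mbuffer flags are illustrative only):

```python
import shutil

def build_send_pipeline(snapshot, compress="lzop"):
    """Sketch: assemble a `zfs send` shell pipeline, inserting the
    optional helpers (compressor, mbuffer, pv) only when they are
    actually found on PATH -- the check SmartOS seems to break."""
    parts = [f"zfs send {snapshot}"]
    if compress != "none" and shutil.which(compress):
        parts.append(compress)  # e.g. lzop or gzip, as set in the config
    if shutil.which("mbuffer"):
        parts.append("mbuffer -q -m 512M")  # illustrative flags
    if shutil.which("pv"):
        parts.append("pv")
    return " | ".join(parts)
```

With compress = none the compressor is never added, which is why that setting sidesteps the lzop errors.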

As for this error here:

Sep 05 17:51:19 ERROR: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob, but snapshots exist. Not sending...

You should delete the snapshots you took manually on the dest, as pyznap will not want to overwrite anything and thus will not send if there are already snapshots that do not match the source.

shotelco commented 5 years ago

Alright, I have installed lzop and pv on the remote/dest (apparently mbuffer was already installed).

Then, from the local/source, ran both snap & send:

~]# cd /opt/local/bin
[root@SMB1 /opt/local/bin]# ./pyznap snap
Sep 05 19:00:36 INFO: Starting pyznap...
Sep 05 19:00:36 INFO: Taking snapshots...
Sep 05 19:00:36 INFO: Taking snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-05_19:00:36_hourly...
Sep 05 19:00:36 INFO: Cleaning snapshots...
Sep 05 19:00:36 INFO: Deleting snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-05_17:50:10_hourly...
Sep 05 19:00:37 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-05_17:50:10_hourly...
Sep 05 19:00:37 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-05_17:50:10_hourly...
Sep 05 19:00:37 INFO: Deleting snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-05_17:50:10_hourly...
Sep 05 19:00:37 INFO: Finished successfully...

[root@SMB1 /opt/local/bin]# ./pyznap send
Sep 05 19:00:44 INFO: Starting pyznap...
Sep 05 19:00:44 INFO: Sending snapshots...
Sep 05 19:00:51 ERROR: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob, but snapshots exist. Not sending...
Sep 05 19:00:51 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1@pyznap_2019-09-05_19:00:36_hourly (~13.6K)...
Sep 05 19:00:52 INFO: root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1 is up to date...
Sep 05 19:00:52 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/admin@pyznap_2019-09-05_19:00:36_hourly (~8.5M)...
8.58MiB 0:00:04 [1.73MiB/s] [====================================================>] 100%
Sep 05 19:00:59 INFO: root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin is up to date...
Sep 05 19:01:01 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob, sending oldest snapshot zones/0f..trunkated..ec/data/petashare1/backupblob@pyznap_2019-09-05_19:00:36_hourly (~12.6K)...
Sep 05 19:01:02 INFO: root@192.168.3.105:zones/90..trunkated..76/data/home/subrigo/petashare1/backupblob is up to date...
Sep 05 19:01:02 INFO: Finished successfully...

Looks like pyznap sent something somewhere...but now I can't locate where it sent whatever it sent on the remote/dest host. So, on the remote/dest host I ran:

~]# zfs list -t snap
NAME  USED  AVAIL  REFER  MOUNTPOINT
zones/90..trunkated..76/data/home/admin@pyznap_2019-09-05_19:00:36_hourly  0  -  96K  -
zones/90..trunkated..76/data/home/admin/petashare1@pyznap_2019-09-05_19:00:36_hourly  0  -  104K  -
zones/90..trunkated..76/data/home/admin/petashare1/admin@pyznap_2019-09-05_19:00:36_hourly  0  -  8.43M  -
zones/90..trunkated..76/data/home/admin/petashare1/backupblob@pyznap_2019-09-05_19:00:36_hourly  0  -  96K  -
zones/90..trunkated..76/data/home/backupblob@backup  0  -  245M  -
zones/90..trunkated..76/data/home/backupblob@backup1  0  -  245M  -
zones/90..trunkated..76/data/home/backupblob/petashare1@pyznap_2019-09-05_19:00:36_hourly  0  -  104K  -
zones/90..trunkated..76/data/home/backupblob/petashare1/admin@pyznap_2019-09-05_19:00:36_hourly  0  -  8.43M  -
zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob@pyznap_2019-09-05_19:00:36_hourly  0  -  96K  -

Still confused, I looked at:

~]# cd /home
[root@smb01DC /home]# ls
admin  backupblob
[root@smb01DC /home]# cd backupblob
[root@smb01DC /home/backupblob]# ls
'MULTACOM - 2015 Type 2 SOC 2 - Report.pdf'  abc.txt  src-backup
'backupblob Employee Manual - Rev B4_book.docx'  adc.txt  backupblob-petashare1
[root@smb01DC /home/backupblob]# cd backupblob-petashare1/
[root@smb01DC /home/backupblob/backupblob-petashare1]# ls
admin
[root@smb01DC /home/subrigo/backupblob-petashare1]# cd admin
[root@smb01DC /home/subrigo/backupblob-petashare1/admin]# ls
[root@smb01DC /home/subrigo/backupblob-petashare1/admin]#

~]# cd /home/admin
[root@smb01DC /home/admin]# ls
[root@smb01DC /home/admin]#

Note: I tried changing the pyznap.conf a bit to point to "admin" instead of backupblob, as there are 2 mount points on the local/source SMB host:

backup test1

[zones/0f..trunkated..ec/data]
  hourly = 1
  snap = yes
  clean = yes
  dest = ssh:22:root@192.168.3.105:zones/90..trunkated..76/data/home/admin
  dest_keys = .ssh/id_rsa
  compress = gzip

But I still can't locate what pyznap sent, or where it sent it, unless it's in some snapshot file?

Per your recommendation, I removed (zfs destroy) the 2 test snapshots I created: zfs destroy zones/90..trunkated..76/data/home/backupblob@backup1 (and @backup2). They no longer show up in zfs list -t snap.

I reverted back to the original pyznap.conf:

backup test1

[zones/0f986a00-9397-4679-e6de-b8594fe05cec/data]
  hourly = 1
  snap = yes
  clean = yes
  dest = ssh:22:root@192.168.3.105:zones/90225dd3-d5c8-c7c5-df64-cb68397fa976/data/home/backupblob
  dest_keys = .ssh/id_rsa
  compress = gzip

Re-ran ./pyznap send and got a new error:

[root@SMB1 /opt/local/bin]# ./pyznap send
Sep 05 19:59:10 INFO: Starting pyznap...
Sep 05 19:59:10 INFO: Sending snapshots...
Sep 05 19:59:16 INFO: No common snapshots on root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob, sending oldest snapshot zones/0f..trunkated..ec/data@pyznap_2019-09-05_19:00:36_hourly (~12.6K)...
Sep 05 19:59:16 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob: cannot unmount '/home/backupblob': Device busy - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default - SMF Initialization problems..svc:/network/nfs/server:default...
Sep 05 19:59:16 INFO: root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1 is up to date...
Sep 05 19:59:17 INFO: root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/admin is up to date...
Sep 05 19:59:17 INFO: root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob is up to date...
Sep 05 19:59:17 INFO: Finished successfully...

Now when checking remote/dest zfs list, there are new datasets (which I think pyznap created), but which I cannot seem to get to - and they are not visible via SMB:

~]# zfs list
NAME                                                                USED   AVAIL  REFER  MOUNTPOINT
zones                                                               159G   2.48T  796K   /zones
zones/90..trunkated..76                                             1.48G  1.46T  1.79G  /zones/90..trunkated..76
zones/90..trunkated..76/data                                        263M   1.46T  88K    /zones/90..trunkated..76/data
zones/90..trunkated..76/data/home                                   263M   1.46T  88K    /home
zones/90..trunkated..76/data/home/admin                             8.71M  1.46T  96K    /home/admin
zones/90..trunkated..76/data/home/admin/petashare1                  8.62M  1.46T  104K   /home/admin/petashare1
zones/90..trunkated..76/data/home/admin/petashare1/admin            8.43M  1.46T  8.43M  /home/admin/petashare1/admin
zones/90..trunkated..76/data/home/admin/petashare1/backupblob       96K    1.46T  96K    /home/admin/petashare1/backupblob
zones/90..trunkated..76/data/home/backupblob                        254M   1.46T  245M   /home/backupblob
zones/90..trunkated..76/data/home/backupblob/petashare1             8.62M  1.46T  104K   /home/backupblob/petashare1
zones/90..trunkated..76/data/home/backupblob/petashare1/admin       8.43M  1.46T  8.43M  /home/backupblob/petashare1/admin
zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob  96K    1.46T  96K    /home/backupblob/petashare1/backupblob

I think I have some mountpoints on my hosts that don't conform to the pyznap.conf. Would you mind reviewing the pyznap.conf? It seems like pyznap is now sending something somewhere on the remote/dest host, but for whatever reason it isn't visible to me.

UPDATE: After looking through the SmartOS help, the error "SMF Initialization problems..svc:/network/nfs/server:default..." is generated by SmartOS and considered a "cosmetic error" (one that displays as an error, but is not important and does not affect the process). So for now, let's ignore that.

yboetz commented 5 years ago

For this error:

Sep 05 19:59:16 ERROR: Error while sending to root@192.168.3.105:zones/90..trunkated..76/data/home/backupblob: cannot unmount '/home/backupblob': Device busy 

You should not have anything open on the dest that uses /home/backupblob, as it needs to be unmounted for receiving. This also means not being logged in with that folder open. It is best not to have the backup dest mounted at all unless you need it.

You should maybe completely destroy the backup and retry (if it's not too much data), just to make sure there's nothing left over from the earlier pyznap errors. So maybe destroy and recreate zones/90225dd3-d5c8-c7c5-df64-cb68397fa976/data/home/backupblob on the dest. Then for your config, you only need to change dest_keys to a full path, e.g. dest_keys = /home/user/.ssh/id_rsa; the rest is OK. But you need to keep more than 1 hourly snapshot, because pyznap needs common old snapshots for incremental backups. If you only keep 1 hourly, it will be destroyed every hour and will no longer be usable as the base for an incremental send. General rule: if you run pyznap send every hour, keep a few hourlies (at least two, more to be safe); if you run pyznap send once per day, keep at least several dailies; if you run it once a week, keep several weeklies; etc.
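Applied to the config from this thread, a retention that keeps incremental sends working might look like this (a sketch only; the hourly count of 24 and the /root/.ssh key path are example assumptions, not prescribed values):

```
[zones/0f986a00-9397-4679-e6de-b8594fe05cec/data]
  hourly = 24
  snap = yes
  clean = yes
  dest = ssh:22:root@192.168.3.105:zones/90225dd3-d5c8-c7c5-df64-cb68397fa976/data/home/backupblob
  dest_keys = /root/.ssh/id_rsa
  compress = gzip
```

With 24 hourlies kept, an hourly (or even daily) pyznap send always finds a snapshot that both sides still share.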

yboetz commented 5 years ago

If you do pyznap send as in your last config, then your data should be on the dest under zones/90225dd3-d5c8-c7c5-df64-cb68397fa976/data/home/backupblob, i.e. wherever this is mounted. If I read your output correctly, it should be mounted under /home/backupblob. You can check the mountpoint and whether each filesystem is mounted with:

zfs list -o name,mounted,mountpoint
shotelco commented 5 years ago
[root@smb01DC ~]# zfs list -o name,mounted,mountpoint
NAME                                                                MOUNTED  MOUNTPOINT
zones                                                               no       /zones
zones/90..trunkated..76                                             yes      /zones/90..trunkated..76
zones/90..trunkated..76/data                                        yes      /zones/90..trunkated..76/data
zones/90..trunkated..76/data/home                                   yes      /home
zones/90..trunkated..76/data/home/admin                             no       /home/admin
zones/90..trunkated..76/data/home/admin/petashare1                  no       /home/admin/petashare1
zones/90..trunkated..76/data/home/admin/petashare1/admin            no       /home/admin/petashare1/admin
zones/90..trunkated..76/data/home/admin/petashare1/backupblob       no       /home/admin/petashare1/backupblob
zones/90..trunkated..76/data/home/backupblob                        yes      /home/backupblob
zones/90..trunkated..76/data/home/backupblob/petashare1             no       /home/backupblob/petashare1
zones/90..trunkated..76/data/home/backupblob/petashare1/admin       no       /home/backupblob/petashare1/admin
zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob  no       /home/backupblob/petashare1/backupblob

I take it that when they are not mounted, I cannot see or access these? I will need to find a way to maintain an SMB connection (for Windows clients) on both local/source and remote/dest, as these clients should be able to access the backups to browse and recover files if necessary.

yboetz commented 5 years ago

You can see that these three filesystems are not mounted:

zones/90..trunkated..76/data/home/backupblob/petashare1             no  /home/backupblob/petashare1
zones/90..trunkated..76/data/home/backupblob/petashare1/admin       no  /home/backupblob/petashare1/admin
zones/90..trunkated..76/data/home/backupblob/petashare1/backupblob  no  /home/backupblob/petashare1/backupblob

So obviously you won't see them in Samba. You can mount them manually using

zfs mount zones/90..trunkated..76/data/home/backupblob/petashare1

and similar for the other two.
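To find everything that still needs mounting in one pass, one could parse the script-friendly output of zfs list -H -o name,mounted (the tab-separated -H output is standard zfs behavior, but this helper itself is just a sketch, not part of pyznap):

```python
def unmounted_children(parent, zfs_list_output):
    """Given the output of `zfs list -H -o name,mounted` (one dataset
    per line, tab-separated), return the child filesystems of `parent`
    that are not mounted -- i.e. candidates for `zfs mount`."""
    result = []
    for line in zfs_list_output.strip().splitlines():
        name, mounted = line.split("\t")
        if name.startswith(parent + "/") and mounted == "no":
            result.append(name)
    return result
```

Each returned name can then be passed to zfs mount, which is exactly the manual step described above.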

shotelco commented 5 years ago

Thanks. I was wondering why I could not get to them (the un-mounted filesystems) as root in the console. I believe that until they are mounted, they are not part of the filesystem tree. Also, did pyznap create these filesystems when it performed the zfs send? If so, I probably didn't have the correct file paths, as I wanted to place the backups in the existing mounted filesystems, i.e. /home/backupblob. Otherwise, one would need to manually mount any new filesystems pyznap creates (assuming that's what created these)?

shotelco commented 5 years ago

Also, would you recommend I use the output you requested for

~]# echo $PATH
/usr/local/sbin:/usr/local/bin:/opt/local/sbin:/opt/local/bin:/usr/sbin:/usr/bin:/sbin

to replace the PATH string I currently have in the cron?

cron/crontab/pyznap:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

!# */15 * * * * root /opt/local/bin/pyznap snap >> /var/log/pyznap.log 2>&1
!# */20 * * * * root /opt/local/bin/pyznap send >> /var/log/pyznap.log 2>&1

yboetz commented 5 years ago

Also, did pyznap create these filesystems when it performed the zfs send?

Yes, pyznap creates any sub-filesystems. You have zones/90..trunkated..76/data/home/backupblob mounted on /home/backupblob, so any sub-filesystems of that will be mounted correctly under it. You just need to tell zfs to actually mount them.

Also, would you recommend I use the output you requested for: ~]# echo $PATH /usr/local/sbin:/usr/local/bin:/opt/local/sbin:/opt/local/bin:/usr/sbin:/usr/bin:/sbin to replace the PATH string I currently have in the cron?

Yes, you should use the SmartOS paths.

yboetz commented 5 years ago

Could you try again with the new release v1.4.3? It would be great if you could uninstall lzop, mbuffer and pv again and then run pyznap, to see whether pyznap now correctly skips them and finishes without error. After that you can install them again.

shotelco commented 5 years ago

Will do and follow up shortly.