oetiker / znapzend

zfs backup with remote capabilities and mbuffer integration.
www.znapzend.org
GNU General Public License v3.0

Can't Get skipOnPreSendCmdFail to Work as Expected #337

Closed bebop350 closed 6 years ago

bebop350 commented 6 years ago

I've been trying to get this command to execute correctly (literally) all day. If someone would be so kind as to give me a functional example of how to use it, I'd be grateful. Perhaps I'm doing it right, but it's broken, or my expectations are incorrect.

  1. I create a backup with a DST pre-send-command that's designed to fail (e.g., executing files that aren't there or irrelevant files; a minimal sketch of such a script follows after this list): sudo znapzendzetup create SRC '90d=>12d' Source/Files DST: '90d=>12d' Target/Backup 'bin/sh /usr/local/bin/norepl.sh'

(1A. I then started the daemon, but that's likely irrelevant: sudo znapzend --skipOnPreSendCmdFail --daemonize)

  2. Then I specify skipOnPreSendCmdFail and run a backup: sudo znapzend --runonce=Source/Files --skipOnPreSendCmdFail
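
For reference, the exact contents of norepl.sh don't matter; it's just a stand-in that is guaranteed to fail. Roughly, any command that exits non-zero should do, something like:

$ cat /usr/local/bin/norepl.sh
#!/bin/sh
# deliberately fail so the pre-send-command check does not pass
echo "norepl.sh: refusing replication" >&2
exit 1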

This is the output: https://paste.ofcode.org/3ZEGHG6CCDXyAiTpZE9ede
A snapshot is created on the SRC but is not copied or replicated to the DST. I was expecting the snapshot to be created on the SRC and also copied to the DST (but not replicated), given the option's description: "skip replication if the pre-send-command fails."

If someone could shed some light on this, again, I'd be very grateful. Thank you.

System Info: Antergos (Arch Linux based, x64, ~latest kernel), ZFS 0.7.8-1 (latest, I think), znapzend v0.18.0

atj commented 6 years ago

Can you explain what you mean by "copied" vs. "replicated"?

If znapzend is started with the "--skipOnPreSendCmdFail" argument and a pre-send-command for a destination fails, then no attempt will be made to replicate the snapshot to that destination. There is no "copying" step.
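
To illustrate: the pre-send-command is simply run before the send to that destination, and with "--skipOnPreSendCmdFail" its exit status decides whether the send happens at all. A hypothetical example (the script name and host below are made up) is skipping replication whenever the backup host is unreachable:

$ cat /usr/local/bin/check-target.sh
#!/bin/sh
# exit 0 (success) only if the backup host answers a single ping;
# any non-zero exit makes znapzend skip this destination when it
# was started with --skipOnPreSendCmdFail
ping -c 1 -W 2 backup-host >/dev/null 2>&1

You would configure this in the same slot where your norepl.sh currently sits in the znapzendzetup create command.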

bebop350 commented 6 years ago

Thank you for clarifying.

I was hoping to have snapshots only copied to DST to save disk space.

Copied as in copied and then pasted (for lack of a better term): the SRC snapshot (ex: /Source/Files/.zfs/snapshot/2018-04-22-155545/) copied and then pasted to DST (ex: /Backup/Target/.zfs/snapshot/2018-04-22-155545/). This is the behavior I had expected.

As opposed to replication: SRC (Source/Files/Dir1, Source/Files/Dir2) replicated on DST (Backup/Target/Dir1, Backup/Target/Dir2), in addition to snapshot copy/paste.

Perhaps that's not viable and my understanding of snapshots/replication is incorrect. If that's the case I apologize for the confusion.

atj commented 6 years ago

Yes, I'm afraid you are slightly misunderstanding how snapshots and send/receive work in ZFS. See below:

$ zfs create tank/test
$ touch /test/file1
$ zfs snapshot tank/test@snap1
$ touch /test/file2
$ ls -l /test
total 1
-rw-r--r-- 1 root root 0 Apr 24 11:29 file1
-rw-r--r-- 1 root root 0 Apr 24 11:29 file2
$ zfs list tank/test
NAME           USED  AVAIL  REFER  MOUNTPOINT
tank/test      160K  97.6G   104K  /test
$ zfs list tank/test1
cannot open 'tank/test1': dataset does not exist
$ zfs send tank/test@snap1 | zfs recv tank/test1
$ zfs list tank/test1
NAME            USED  AVAIL  REFER  MOUNTPOINT
tank/test1       96K  97.6G    96K  /test1
$ ls -l /test1
total 1
-rw-r--r-- 1 root root 0 Apr 24 11:29 file1
$ zfs list -t filesystem,snapshot -r tank/test1
NAME                  USED  AVAIL  REFER  MOUNTPOINT
tank/test1             96K  97.6G    96K  /test1
tank/test1@snap1        0B      -    96K  -

Once you understand that snapshots are children of filesystems and therefore cannot exist independently, the above should make sense.

Read the "zfs receive" section of the zfs man page and note the difference between "full" and "incremental" streams. The following page may also be helpful:

https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html#gbimy
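
To tie that back to the example above, an incremental stream only carries the changes between two snapshots. A rough sketch, assuming a second snapshot is taken on the source:

$ zfs snapshot tank/test@snap2
$ zfs send -i tank/test@snap1 tank/test@snap2 | zfs recv tank/test1

After this, tank/test1 carries both @snap1 and @snap2, which is essentially what znapzend does on every run after the initial full send.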

bebop350 commented 6 years ago

Wow, thank you for taking the time to explain that, I really appreciate it!

I was unaware of that critical detail, that snapshots "cannot exist independently." Indeed, now it all makes sense.

I'll go ahead and close this. Thanks again.