
ZnapZend


ZnapZend is a ZFS-centric backup tool that creates snapshots and sends them to backup locations. It relies on the ZFS tools snapshot, send and receive to do its work. It has the built-in ability to manage both local snapshots and remote copies, thinning them out as time progresses.

The ZnapZend configuration is stored as properties in the ZFS filesystem itself. Keep in mind that only local ZFS properties of each configured dataset are considered (not "inherited", not "received"), and that there is some domain-specific handling of recursion for certain settings, based on the presence and value of an org.znapzend:recursive property.

Note that while recursive configurations are well supported for setting up backup and retention policies for a whole dataset subtree under the dataset to which you applied explicit configuration, pruning of such trees ("I want every dataset under var except var/tmp") is at this time experimental: it works, but there may be rough edges that require further development.

For that reason you probably do not want to enable ZnapZend against the root datasets of your pools, but should be more fine-grained in your setup. This is consistent with (and due to) the use of recursive ZFS snapshots, where the command is targeted at one dataset and affects it and all of its children, which makes it possible to get a consistent point-in-time set of snapshots across multiple datasets.

That said, for several years ZnapZend has supported setting the local ZFS property org.znapzend:enabled=off (and only that property) on datasets which descend from the one carrying a full backup retention schedule configuration (which in turn sets that its descendants should be handled per org.znapzend:recursive=off); exactly these "not-enabled" datasets with the enabled=off setting are then not tracked with a long-term history, locally or remotely.
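
For illustration, a minimal sketch of such an exclusion, with hypothetical dataset names:

```sh
# rpool/home carries the full ZnapZend backup plan with recursive handling of
# its children; rpool/home/worker/cache should not keep a long-term snapshot
# history, locally or remotely.
zfs set org.znapzend:enabled=off rpool/home/worker/cache
```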

NOTE: Implementation-wise, snapshots of the dataset with a full backup retention schedule configuration are made recursively, so that taking them is a reliable atomic operation. Snapshots of "not-enabled" datasets are pruned afterwards. Different ZnapZend versions have varied in whether they send such snapshots to a remote destination (e.g. as part of a recursive ZFS send stream) and prune them there afterwards, or avoid sending them in the first place.

An important take-away is that there may be a temporary storage and traffic cost associated with "not-enabled" dataset snapshots, and that their creation and deletion are separated in time: if the host reboots (or the ZnapZend process is otherwise interrupted) at the wrong moment, such snapshots may linger indefinitely and "unexpectedly" consume disk space for their uniquely referenced blocks.

Current ZnapZend releases extend this support with the ability to also set the local ZFS property org.znapzend:recursive=on on such datasets (so there are two properties: one to enable/disable, and one to make that setting recursive), with the effect that whole sub-trees of ZFS datasets can be excluded from ZnapZend retention handling with one configuration in their common ancestor dataset (previously this required enabled=off on each excluded dataset).

This behavior can be useful, for example, on CI build hosts, where you would generally enable backups of rpool/home but would exclude the location for discardable bulk data like build roots or caches in the worker account's home.
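
A hedged sketch of that scenario, with hypothetical dataset names, excluding a whole sub-tree through its common ancestor:

```sh
# rpool/home carries the full ZnapZend backup plan; everything under
# rpool/home/worker/builds (build roots, caches, ...) is excluded in one go.
zfs set org.znapzend:enabled=off rpool/home/worker/builds
zfs set org.znapzend:recursive=on rpool/home/worker/builds
```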

NOTE: Technically, the code allows you to further set enabled=on on certain sub-datasets of the not-enabled tree to re-enable snapshot tracking for that dataset (possibly recursively for its descendants), but this feature has not yet seen much use and feedback in real-life situations. For instance, you may have to pre-create the parent datasets (disabled on the source) on remote destinations so that they can receive regular backups from ZnapZend.

Compilation and Installation from source Inztructionz

If your distribution does not provide a packaged version of znapzend, or if you want a custom-made copy of znapzend, you will need a compiler and related tooling to build some of the prerequisite perl modules into binary libraries for the target OS and architecture. At run time you will need just perl.

For a long time the znapzend build required a GNU Make implementation. While this is no longer strictly the case, and at least Sun Make (as on OpenIndiana) and BSD Make (as on FreeBSD) are also known to work, the instructions below still mention it as an optional fallback (if the system-provided tools fail, fall back to gmake).

The Git checkout includes a pregenerated configure script. To rebuild a checkout from scratch you may also want to run ./bootstrap.sh, which requires the autoconf/automake stack.

With that in place you can now utter:

ZNAPVER=0.23.2
wget https://github.com/oetiker/znapzend/releases/download/v${ZNAPVER}/znapzend-${ZNAPVER}.tar.gz
tar zxvf znapzend-${ZNAPVER}.tar.gz
cd znapzend-${ZNAPVER}
### ./bootstrap.sh
./configure --prefix=/opt/znapzend-${ZNAPVER}

NOTE: to get the current state of the master branch without using git tools, fetch https://github.com/oetiker/znapzend/archive/master.zip

If the configure script finds anything noteworthy, it will tell you about it.

If any perl modules are found to be missing, they get installed locally into the znapzend installation. Your system perl installation will not be modified!

make
make install

Optionally (but recommended), put symbolic links to the installed binaries into a directory in the system PATH, e.g.:

ZNAPVER=0.23.2
for x in /opt/znapzend-${ZNAPVER}/bin/*; do ln -fs ../../../$x /usr/local/bin/; done

Verification Inztructionz

To make sure your resulting set of znapzend code and dependencies plays well together, you can run unit-tests with:

make check

or

./test.sh

NOTE: the two methods run the same testing scripts with different handling, so they might behave differently. If that happens in practice, it is a bug worth reporting and fixing.

Packages

Debian control files, a guide on using them, and experimental Debian packages can be found at https://github.com/Gregy/znapzend-debian

An RPM spec file can be found at https://github.com/asciiphil/znapzend-spec

For recent versions of Fedora and RHEL 7-9 there's also a copr repository by spike (sources at https://gitlab.com/copr_spike/znapzend):

dnf copr enable spike/znapzend
dnf install znapzend

For Gentoo there's an ebuild in the gerczei overlay.

For OpenIndiana there is an IPS package at http://pkg.openindiana.org/hipster/en/search.shtml?token=znapzend&action=Search, made with the recipe at https://github.com/OpenIndiana/oi-userland/tree/oi/hipster/components/sysutils/znapzend:

pkg install backup/znapzend

Configuration

Use the znapzendzetup program to define your backup settings. They will be stored directly in dataset properties and cover both the local snapshot schedule and any number of destinations to send snapshots to (potentially with different retention policies on those destinations). You can enable recursive configuration, so that the settings apply to all datasets under the one you configured explicitly.

Example:

znapzendzetup create --recursive \
   --pre-snap-command="/bin/sh /usr/local/bin/lock_flush_db.sh" \
   --post-snap-command="/bin/sh /usr/local/bin/unlock_db.sh" \
   SRC '7d=>1h,30d=>4h,90d=>1d' tank/home \
   DST:a '7d=>1h,30d=>4h,90d=>1d,1y=>1w,10y=>1month' root@bserv:backup/home

See the znapzendzetup manual for the full description of the configuration options.

For remote backups, znapzend uses ssh. Make sure to configure password-free login (authorized_keys) for ssh to the backup target host, with an account sufficiently privileged to manage its ZFS datasets under the chosen destination root.
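
A minimal sketch of preparing such a login, reusing the root@bserv destination from the example above (the key file name is an assumption):

```sh
# Generate a dedicated, passphrase-less key, install it on the backup target,
# then verify that the login works without a password prompt.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519-bserv
ssh-copy-id -i ~/.ssh/id_ed25519-bserv.pub root@bserv
ssh -i ~/.ssh/id_ed25519-bserv root@bserv zfs list
```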

For local or remote backups, znapzend can use mbuffer to level out the bursty nature of ZFS send and receive, so it is quite beneficial even for local backups into another pool (e.g. on removable media or a NAS volume). It is configured among the options set by znapzendzetup per dataset. Note that in order to use larger (multi-gigabyte) buffers you should point your configuration to a 64-bit binary of the mbuffer program. Sizing the buffer is a practical art that depends on the size and number of your datasets and the I/O speeds of the storage and networking involved. As a rule of thumb, let it absorb at least a minute of I/O, so that while one side of the ZFS dialog is deeply thinking, the other can do its work.

NOTE: Due to backwards-compatibility considerations, the legacy --mbuffer=... setting applies by default to all destination datasets (and to the sender, in the case of the --mbuffer=/path/to/mbuffer:port variant). This may work if the needed programs are all found in PATH under the same short name, but fails miserably if custom full path names are required on different systems.

To avoid this limitation, ZnapZend now allows you to specify custom path and buffer size settings individually for each source and destination dataset in each backup/retention schedule configuration (using the znapzendzetup program, or ZFS dataset properties such as org.znapzend:src_mbuffer directly). The legacy configuration properties are now used as fallback defaults, and may emit warnings whenever they are applied as such.
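
As a hedged sketch, assuming the value of org.znapzend:src_mbuffer is simply the path to the binary (check the znapzendzetup manual for the authoritative property names and value format):

```sh
# Point the source side of this backup plan at a specific mbuffer binary;
# the plain-path value format is an assumption.
zfs set org.znapzend:src_mbuffer=/usr/local/bin/mbuffer tank/home
```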

With this feature in place, the sender may be the only side running an mbuffer, without requiring one on the receiver (e.g. to limit the impact on RAM usage on the backup server). You may also run an mbuffer on each side of the SSH tunnel if networking latency is erratic and has a considerable impact.

The remote system does not need anything beyond ZFS functionality, an SSH server, a user account with key-based SSH login prepared (optionally an unprivileged one with zfs allow settings on a particular target dataset dedicated to receiving your trees of backed-up datasets), and optionally a local copy of the mbuffer program. In particular, as a frequently asked question: the remote system does not require ZnapZend nor its dependencies (perl, etc.) to be installed. (It may however be installed, e.g. if it is used for snapshots of that remote system's own datasets.)

Running

The znapzend daemon is responsible for doing the actual backups.

To see if your configuration is any good, run znapzend in noaction mode first.

znapzend --noaction --debug

If you don't want to wait for the scheduler to actually schedule work, you can also force immediate action by calling

znapzend --noaction --debug --runonce=<src_dataset>

then when you are happy with what you got, start it in daemon mode.

znapzend --daemonize

Best practice is to integrate znapzend into your system startup sequence, but you can also run it by hand. See the init/README.md for some inspiration.
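
For systemd-based Linux systems, a minimal hedged sketch of such an integration follows; the unit files shipped under init/ in the source tree are the authoritative reference, and the paths below assume the /opt install prefix from the build example above:

```sh
# Create a minimal service unit and enable it (paths and options are
# assumptions; adjust them to your installation).
cat > /etc/systemd/system/znapzend.service <<'EOF'
[Unit]
Description=ZnapZend ZFS backup daemon
After=zfs.target

[Service]
ExecStart=/opt/znapzend-0.23.2/bin/znapzend --logto /var/log/znapzend.log
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now znapzend.service
```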

Running by an unprivileged user

In order to allow a non-privileged user to use it, the following permissions are required on the ZFS filesystems (which you can assign with zfs allow):

Sending end: destroy,hold,mount,send,snapshot,userprop

Receiving end: create,destroy,mount,receive,userprop
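
A hedged example of granting these permissions to a hypothetical unprivileged user named znapzend on both ends (dataset names are placeholders):

```sh
# Sending host: allow snapshot handling on the source tree.
zfs allow -u znapzend destroy,hold,mount,send,snapshot,userprop tank/home
# Receiving host: allow receiving into the dedicated destination tree.
zfs allow -u znapzend create,destroy,mount,receive,userprop backup/home
```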

Caveat emptor: with some ZFS implementations the receiver may have further technological constraints. For example, non-root users with ZFS on Linux (as of 2022) may not write into a dataset with the property zoned=on (including one inherited or just received; zfs recv -x zoned and similar options do not prevent it from being replicated), so this property has to be removed as soon as it appears on the destination host with the initial replication stream. For instance, leave a snippet like this running on the receiving host before populating (zfs send -R ...) the destination for the first time:

while ! zfs inherit zoned backup/server1/rpool/rpool/zones/zone1/ROOT ; do sleep 0.1; done

You may also have to zfs allow, by name, all standard ZFS properties which your original datasets customize and which you want applied to the copy (e.g. to eventually restore them), so that the non-privileged user may zfs set them on that dataset and its descendants, e.g.: compression,mountpoint,canmount,setuid,atime,exec,dedup. Perhaps you also optimized the original storage with the likes of logbias,primarycache,secondarycache,sync. Note that other options may be problematic in the long term if actually used by the receiving server, e.g.: refreservation,refquota,quota,reservation,encryption.
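
For example, a hedged one-liner granting the property-related permissions named above to the same hypothetical receiving user:

```sh
# Allow the unprivileged receiver to set commonly customized properties on the
# destination tree (extend the list to match your source datasets).
zfs allow -u znapzend compression,mountpoint,canmount,setuid,atime,exec,dedup backup/home
```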

Generally, check the ZnapZend service (or manual run) logs for any errors and adapt the dataset permissions on the destination pool to satisfy its implementation specifics.

Running with restricted shell

A further security twist when using a non-privileged user on the receiving host is to restrict its shell so that only a few commands may be executed. After all, you leave its gates open with remote SSH access and a private key without a passphrase lying around somewhere. Several popular shells offer a restricted option; for example, BASH has a -r command line option and supports being invoked via an rbash symlink.

NOTE: Some SSH server versions also allow constraining the commands which a certain key-based session may use, and/or limiting from which IP addresses or DNS names such sessions may be initiated. See the documentation on your SSH server's supported authorized_keys file format and keywords for that extra layer.
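
For OpenSSH, a hedged sketch of such an authorized_keys entry on the receiving host (the client address, account name and elided key material are placeholders):

```sh
# Tie the znapzend key to one client address and disable forwarding features.
echo 'from="192.0.2.10",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... znapzend-server1' \
  >> ~znapzend-server1/.ssh/authorized_keys
```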

On the original server, run ssh-keygen to generate an SSH key for the sending account (root or otherwise), possibly into a uniquely named file used just for this connection. You can specify a custom key file name, a non-standard port, acceptable encryption algorithms and other options in the SSH config:

# ~/.ssh/config
Host znapdest
        # "HostName" to access may even be "localhost" if the backup storage
        # system can dial in to the systems it collects data from (with SSH
        # port forwarding back to itself) -- e.g. running without a dedicated
        # public IP address (consumer home network, corporate firewall).
        #HostName localhost
        HostName znapdest.domain.org
        Port 22123
        # May list several SSH keys to try:
        IdentityFile /root/.ssh/id_ecdsa-znapdest
        IdentityFile /root/.ssh/id_rsa-znapdest
        User znapzend-server1
        IdentitiesOnly yes

On receiving server (example for Proxmox/Debian with ZFS on Linux):

* Check that the restricted account gets a PATH which lets it run the needed
  commands, e.g. from the sending server:
```sh
:; ssh znapdest zfs list
```


* Dedicate a dataset (or several) you would use as destination for the znapzend
  daemon, and set ZFS permissions (see suggestions above), e.g.:
```sh
zfs create backup/server1
zfs allow -du znapzend-server1 create,destroy,mount,receive,userprop backup/server1
```

NOTE: When defining a "backup plan" you would have to specify a basename for mbuffer, since the restricted shell would forbid running a fully specified pathname, e.g.:

znapzendzetup edit --mbuffer=mbuffer \
   SRC '6hours=>30minutes,1week=>6hours' rpool/export \
   DST '6hours=>30minutes,1week=>6hours,2weeks=>1day,4months=>1week,10years=>1month' \
       znapdest:backup/server1/rpool/export

Running in Container

znapzend is also available as a Docker container image. Depending on the permissions required, it may need to run as a privileged container.

docker run -d --name znapzend --device /dev/zfs --privileged \
    oetiker/znapzend:master

To configure znapzend, run in interactive mode:

docker exec -it znapzend /bin/sh
$ znapzendzetup create ...
# After exiting, restart znapzend container or send the HUP signal to
# reload config
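
To reload without an interactive session, a hedged alternative is to send that HUP signal from the host (this signals the container's main process):

```sh
docker kill --signal=HUP znapzend
```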

By default, znapzend in the container runs with --logto /dev/stdout. If you wish to pass different arguments, override them at the end of the command:

docker run --name znapzend --device /dev/zfs --privileged \
    oetiker/znapzend:master znapzend --logto /dev/stdout --runonce --debug

Be sure not to daemonize znapzend in the container, as that exits the container immediately.

Troubleshooting

By default a znapzend daemon logs its progress and any problems to the local syslog with the daemon facility, so if the service misbehaves, that is the first place to look. Alternatively, you can set up the service manifest to start the daemon with a different logging configuration (e.g. to a file or to stderr), and perhaps with the debug level enabled.
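
For example, a hedged invocation with file logging and debug output enabled (the log path is an assumption):

```sh
znapzend --daemonize --debug --logto /var/log/znapzend-debug.log
```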

If snapshots on the source dataset begin to pile up and are not cleaned up according to the schedule you have defined, look in the logs for summaries like ERROR: suspending cleanup source dataset because X send task(s) failed, followed by each failed dataset name and a short verdict (e.g. snapshot(s) exist on destination, but no common found on source and destination). Look further up in the logs for more details, and/or temporarily disable the znapzend service (to avoid run-time conflicts) and run a manual replication:

znapzend --debug --runonce=<src_dataset>/failed/child --inherited

...to collect even more under-the-hood details about what is happening and to get ideas about fixing it. See the manual page about the --recursive and --inherited modifiers to --runonce mode for more information.

NOTE: Do not forget to re-enable the znapzend service after you have rectified the problem that prevented normal functionality.

Typical issues include:

One known problem relates to automated backups of datasets whose source can get cloned, renamed and promoted - typically boot environments (the root filesystem of your OS installation, and ZBEs for local zones on illumos/Solaris systems, behave this way to benefit from snapshots during upgrades and to allow easily switching back to an older version if an update went bad). At this time (see issue #503) znapzend does not handle such datasets as branches of a larger ZFS tree, and with --autoCreation mode in place it just makes new complete datasets on the destination pool. On one hand this is wasteful for space (unless you use deduplication, which comes with other costs); on the other, the snapshot histories seen in the same-named source and destination datasets can eventually no longer expose a "last-common snapshot", which causes an error like snapshot(s) exist on destination, but no common found on source and destination.

In case you have tinkered with the ZFS attributes that store ZnapZend retention policies, or if you have a severe version mismatch of ZnapZend (e.g. an update from a PoC or a very old version), znapzendzetup list is quite useful to non-intrusively discover whatever your current version considers to be discrepancies in your active configuration.

Finally, note that yet-unreleased code from the master branch may include fixes to problems you face (see recent commits and closed pull requests), but may also introduce new bugs.

Statistics

If you want to know how much space your backups are using, try the znapzendztatz utility.

Support and Contributions

If you find a problem with znapzend, please open an issue on GitHub, but first check whether somebody has already posted similar symptoms or suggestions, and chime in with your +1 there.

If you'd like to get in touch, come to Gitter.

And if you have a code or documentation contribution, please send a pull request.

Enjoy!

Dominik Hassler & Tobi Oetiker 2024-06-27