pando85 opened 6 years ago
Hm, I specifically made pyznap with recursive snapshots in mind. I think I would have to do some restructuring to allow for non-recursive snapshots. I'll have to think about this.
Hi, I'm thinking that we need this more and more. A use case below: locally replicating zroot to storage/zroot, while having other datasets under storage, e.g. storage/media, storage/downloads, ...
Then the config could look like this, snapshotting everything under storage except storage/zroot. Although due to recursive snapshots this config is not possible, correct?
```
[zroot]
retention policy
snap = yes
clean = yes
dest = storage/zroot

[storage]
retention policy
snap = yes
clean = yes

[storage/zroot]
snap = no
clean = yes
```
Yes, that will not be possible, as you would take additional snapshots in `storage/zroot` that would mess with `zfs send/recv`. In this case you would have to specify a retention policy for each dataset in `storage`, like so:
```
[storage/media]
retention policy
snap = yes
clean = yes

[storage/downloads]
retention policy
snap = yes
clean = yes

[storage/zroot]
snap = no
clean = yes
```
A bit more configuration, but now it will work as intended.
The thing is, I specifically made pyznap with recursive snapshots in mind, as I wanted atomic snapshots across all children of a dataset. For that I have to use recursive snapshots. If I were to allow non-recursive snapshots, then this would not be the case anymore and I would have to go through each child dataset and take a snapshot if the policy says so. So this is a design choice that I'd like to keep like this. I would have to think if this is possible in any other way, maybe taking snapshots then immediately deleting the ones that shouldn't be recursive...
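The "take recursive snapshots, then immediately delete the unwanted ones" idea could be sketched roughly like this (a hypothetical helper, not pyznap's actual code; the `run` parameter is injectable so the commands can be inspected without touching a real pool):

```python
import subprocess

def snap_then_prune(dataset, snapname, excluded, run=subprocess.run):
    """Take an atomic recursive snapshot of `dataset`, then immediately
    destroy it again on every child listed in `excluded`.
    Returns the list of commands issued (handy for dry runs)."""
    cmds = [["zfs", "snapshot", "-r", f"{dataset}@{snapname}"]]
    for child in excluded:
        cmds.append(["zfs", "destroy", f"{child}@{snapname}"])
    for cmd in cmds:
        run(cmd, check=True)  # raises CalledProcessError if zfs fails
    return cmds
```

Note the trade-off: the excluded snapshots do exist for a brief window between the recursive snapshot and the destroy.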
If you want to have snapshots in all child datasets except for some, then you could just specify it in your config like this:
```
[storage]
retention policy
snap = yes
clean = yes

[storage/downloads]
frequently = 0
hourly = 0
...
snap = no
clean = yes
```
Now `storage` and all its children get snapshotted according to the policy, including `downloads`, but whenever you clean snapshots, all of the snapshots of `downloads` will be deleted. If you take snapshots with the `pyznap snap --full` option (or just `pyznap snap`), then snapshots will be taken recursively and then immediately deleted where you don't want them.
But this does not work for zfs send destinations, as you want to keep snapshots there and not take new ones. So for that you would have to do it like I described above.
Edit: Note that for the second example to work, you need to explicitly set all values to 0 in the retention policy to overwrite the parent settings. Options not set will be overwritten by parent values.
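That inheritance rule can be illustrated with a small sketch (assumed semantics based on the note above, not pyznap's actual implementation): options a child section leaves unset fall back to the nearest configured ancestor, which is why every retention value has to be zeroed explicitly.

```python
def effective_config(dataset, configs):
    """Merge config sections from the pool root down to `dataset`;
    deeper sections override their ancestors, unset keys are inherited."""
    merged = {}
    parts = dataset.split("/")
    for i in range(1, len(parts) + 1):
        section = "/".join(parts[:i])
        merged.update(configs.get(section, {}))
    return merged
```

For example, if `[storage]` sets `hourly = 24` and `[storage/downloads]` only sets `snap = no`, then `storage/downloads` still ends up with `hourly = 24` unless you explicitly set `hourly = 0` in its section.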
I see. Do you think you could maybe add an extra config option, e.g. `recursive = yes`? With that set, pyznap would take snapshots with `-r`; with `recursive = no`, it would loop through the underlying datasets instead.
Just a thought on how you could possibly tackle this.
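A rough sketch of what the proposed `recursive = no` behaviour could do (hypothetical, option name and helper assumed): enumerate the descendants with `zfs list -r` and snapshot each one individually, skipping excluded datasets. Unlike `-r`, these per-dataset snapshots would not be atomic across datasets:

```python
import subprocess

def snapshot_non_recursive(parent, snapname, excluded=(), run=subprocess.run):
    """Snapshot `parent` and each of its descendants one by one,
    skipping any dataset listed in `excluded`."""
    listing = run(["zfs", "list", "-H", "-r", "-o", "name", parent],
                  capture_output=True, text=True, check=True)
    taken = []
    for ds in listing.stdout.splitlines():
        if ds in excluded:
            continue
        run(["zfs", "snapshot", f"{ds}@{snapname}"], check=True)
        taken.append(ds)
    return taken
```

The injectable `run` is only there so the logic can be exercised without a real pool.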
Yes that might be a possibility. I'll have to see how hard that is to implement and how much time I have :).
Thanks for considering
Note that the following command actually works atomically (only within the same pool), so if the command can be built in the script (and it's not too long) it works:

```
zfs snapshot [-r] rpool/dataset1@snapshot rpool/dataset2@snapshot
```
Hm that is quite interesting, thanks.
At the moment there is a `ZFSDataset` class that has a `snapshot` function that takes a snapshot of that dataset only (optionally with `-r` specified). So I would have to rewrite that a bit, such that multiple snapshots across the same pool can be taken in one command.
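Such a restructuring could, for instance, collect the snapshot names first and emit one `zfs snapshot` invocation per pool (a sketch of the idea, not the current pyznap code; `zfs snapshot` accepts multiple snapshots of datasets in the same pool and creates them atomically):

```python
from collections import defaultdict

def batched_snapshot_cmds(datasets, snapname):
    """Group datasets by pool and build one atomic `zfs snapshot`
    command per pool."""
    by_pool = defaultdict(list)
    for ds in datasets:
        by_pool[ds.split("/", 1)[0]].append(ds)
    return [["zfs", "snapshot"] + [f"{ds}@{snapname}" for ds in group]
            for group in by_pool.values()]
```

Datasets from different pools still need separate commands, since atomicity only holds within a pool.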
pyznap with recursive snapshots as an option could be the best backup solution.
I use Proxmox and other systems with ZFS. I tried znapzend, sanoid/syncoid and other solutions.
- znapzend – I have several servers, but due to the ZFS params, maintenance is horrible, and it has no snap/send separation
- sanoid – very good, but it has no atomic snapshots and no way to exclude sending unnecessary snapshots
- pyznap – cannot create non-recursive snapshots
I have not had time yet; it's still on my to-do list for pyznap. For now you can use the workaround described above. If you only want to take snapshots of child filesystems, you can also set up a policy similar to mine:
```
[rpool]
hourly = 24
daily = 7
weekly = 4
monthly = 6
snap = no
clean = yes

[rpool/ROOT/ubuntu]
snap = yes

[rpool/home]
snap = yes

[rpool/var/log]
snap = yes

[rpool/opt]
snap = yes
```
Here you specify the policy at the root (`rpool`) level, but set `snap = no` and then only activate the policy for child filesystems.
A few things… With ZFS channel programs you can take atomic snapshots of everything, and they don't have to be recursive. It's also a bad idea to use the root dataset of a pool for files if possible; there are bugs like space not being freed until unmount/export, and other edge cases. What I do is make a pool, set `canmount=off`, then make `pool/files` and have it mount over the top of `pool`.
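For reference, the channel-program route might look roughly like this: generate a small Lua script that snapshots an explicit list of datasets, then run it with `zfs program`. This is only a sketch; `zfs.sync.snapshot` does exist in OpenZFS channel programs, but the exact script below is illustrative and has not been tested against a real pool:

```python
def build_channel_program(datasets, snapname):
    """Return Lua source that snapshots each dataset; channel programs
    run in syncing context, so all snapshots land together."""
    lines = [f'assert(zfs.sync.snapshot("{ds}@{snapname}") == 0)'
             for ds in datasets]
    return "\n".join(lines)

def zfs_program_cmd(pool, script_path):
    # The generated script would be written to `script_path` first.
    return ["zfs", "program", pool, script_path]
```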
As @redmop said, you can take atomic snapshots within the same pool by specifying

```
zfs snapshot [-r] rpool/dataset1@snapshot rpool/dataset2@snapshot
```

But this would need some restructuring of the Python code, as at the moment every dataset is a class instance and calls its own `snapshot` method with the `zfs snapshot dataset@snapname` command. Putting multiple of those commands together is not possible in the current code.
We have stuff in a pool that we want to snap, but not snap a child dataset which is a zfs send destination. e.g.:
```
[srv]
snap = yes

[srv/wallet]
snap = no
```
This is currently impossible.
Here is a pull request adding a config option to run non-recursive snapshots: https://github.com/yboetz/pyznap/pull/108. I am currently running it on my homelab system.
I basically list each dataset target in the config with the non-recursive option, so that I can choose how to snapshot each sub-tree differently.
It takes snapshots for all children, but I'm using Docker on my server, and taking those snapshots means also snapshotting all the Docker driver datasets.
This doesn't make sense for me when I only want to back up my zroot volume.
It could be fixed with a new config option like `recursively = no`. Thanks for your software!