I'm told that mounting on top of read-only directories should work fine if the directories already exist, but ZFS on Linux apparently deletes and recreates mount points, so the snapshots I used in the example above didn't contain the mount directories.
If that is true, a recursive mount command that does nothing more than mount recursively should work, as long as the mount point directories are made to exist.
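As a rough illustration, here is a minimal, untested sketch of such a recursive mount, assuming every parent snapshot already contains its children's mount point directories (the pool, snapshot, and target names are illustrative, not from this issue):

```sh
#!/usr/bin/env bash
# Sketch (untested): mount a recursive snapshot set read-only under $TARGET.
# Relies on the assumption above: each parent snapshot already contains the
# directories its children mount onto, so no mkdir into read-only territory
# is needed. POOL, SNAP, and TARGET are illustrative names.
set -eu
POOL=pool1 SNAP=mysnap1 TARGET=/mnt/recursive-snap
mkdir -p "$TARGET"
# `zfs list -r` sorts parents before their children, so mount order is correct.
zfs list -rH -o name "$POOL" | while read -r ds; do
  mount -t zfs -o ro "${ds}@${SNAP}" "${TARGET}${ds#"$POOL"}"
done
```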
Note to self, references I have used here: https://serverfault.com/questions/450818/recursively-mounting-zfs-filesystems https://serverfault.com/questions/340837/how-to-delete-all-but-last-n-zfs-snapshots
Motivation
There is a feature to take snapshots recursively (`zfs snapshot -r`); however, there is no integrated functionality that I could find for handling such snapshot trees. I think integrating such functionality would be warranted because it would give ZFS symmetry between data storage and recovery.

Personally, I'd like to take a ZFS snapshot to get a frozen filesystem that I can then safely feed to the restic backup tool. The hidden .zfs/snapshot directories would be a great candidate for serving the files to restic; however, they only contain the contents of the single dataset, not the files of the child datasets.
I'm not the only one with a use case warranting this: five years ago someone wrote a script doing something like it, which is linked below.
Attempts
Due to the read-only nature of the snapshots, something like the following cannot be trivially applied, though the complexity is low enough to be acceptable and portable:

```sh
# Attempt: create a mount point per dataset, then mount each snapshot onto it.
# This breaks once a parent snapshot is mounted, because its directory tree
# becomes read only and the child mount points can no longer be created.
zfs list -rH -o name pool1 | xargs -L 1 -I"{}" bash -c "mkdir -p {} ; mount -t zfs {}@mysnap1 {}"
```
In case someone accidentally tries to run something like that: you will get errors (the zfs error happens if the mount target doesn't exist).
Further attempts would probably involve some kind of overlay filesystem, similar to what appears to be going on in https://gist.github.com/jhujhiti/ea15bdb11acd1165cd4d .
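For instance, one might overlay a writable tmpfs on top of a read-only parent snapshot so that the missing child mount point directories can be created in the merged view. A rough, untested sketch follows; all paths and names are illustrative, and how well overlayfs cooperates with a ZFS lower layer may vary by version:

```sh
# Sketch (untested): make child mount points creatable on top of a read-only
# snapshot by merging it with a writable tmpfs upper layer via overlayfs.
SNAPDIR=/pool1/.zfs/snapshot/mysnap1   # read-only parent snapshot (illustrative path)
mkdir -p /tmp/ovl /mnt/merged
mount -t tmpfs tmpfs /tmp/ovl
mkdir -p /tmp/ovl/upper /tmp/ovl/work  # upperdir/workdir must share a filesystem
mount -t overlay overlay \
  -o lowerdir="$SNAPDIR",upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
  /mnt/merged
mkdir -p /mnt/merged/child             # writable now, thanks to the overlay
mount -t zfs -o ro pool1/child@mysnap1 /mnt/merged/child
```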
Proposal
To start off I propose the following possible semantics:
* Possible failure modes are total failure, which is not warranted, or partial failure with notification. Datasets that are present should remain accessible; I don't know what a good way would be to notify about a missing snapshot without causing total failure of a tool trying to access the missing snapshot directory (arguably one would want the tool to fail, and then let the admin exclude the directory if desired). Additional possibilities include "locking", the way mounts normally do, simply preventing deletion and similar operations until the mount is released. These kinds of issues probably warrant deeper and more systematic treatment.
** How would this behave when snapshots come and go?
I assume it is the ZFS system that derives the structure, from the hierarchical mountpoint properties in the pool, and that the snapshots themselves don't know about or depend on the other snapshots belonging to the recursive set?
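If that assumption holds, the hierarchy is already recoverable from pool metadata, e.g. (using the example pool name from above):

```sh
# Show the dataset tree and its mount points, as recorded in pool properties.
zfs list -rH -o name,mountpoint pool1
```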
Workarounds
Solutions in the interim; open to suggestions.
I just made these terrifying scripts based on a suggestion to use clones, though this approach loses the benefit of being read only and of avoiding scary operations: no one should use these without thorough testing
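For reference, a minimal, untested sketch of what the clone-based approach looks like, assuming an example pool `pool1`, snapshot `mysnap1`, and an illustrative clone prefix:

```sh
#!/usr/bin/env bash
# Sketch (untested) of the clone approach: clone each dataset's snapshot and
# mount the clones under /mnt/$PREFIX, mirroring the original tree. Unlike
# snapshot mounts, the clones are writable, which is exactly the downside
# noted above. POOL, SNAP, and PREFIX are illustrative names.
set -eu
POOL=pool1 SNAP=mysnap1 PREFIX=restore
zfs list -rH -o name "$POOL" | while read -r ds; do
  clone="${POOL}/${PREFIX}-$(echo "$ds" | tr / -)"
  zfs clone -o mountpoint="/mnt/${PREFIX}${ds#"$POOL"}" "${ds}@${SNAP}" "$clone"
done
# Teardown is the scary part: each clone must later be destroyed, e.g.
#   zfs destroy pool1/restore-pool1-child
```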
Search keywords
mounting snapshots recursively, mounting recursive snapshots, snapshot recursive mount, mount recursive snapshot