dandudikof opened 8 months ago:
For my current setup at home I have solved this issue using systemd "system-environment-generators".
The environment generator manipulates the files in /etc/zfs/zfs-list.cache based on properties set on the root filesystem. It imports the required zpool if needed, enables canmount for the datasets in question, and disables canmount for datasets at the same level. For example, it could enable bpool/ubuntu and, by doing so, disable bpool/debian, for situations where you typically have /boot in a separate pool but keep /userdata available to all Linux installations in the same rpool.
After the environment generator has run, the systemd mount generator picks the result up, creates the mount units, and thus takes care of unmounting everything cleanly during shutdown.
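A rough sketch of the idea (not the actual generator script; the user property, pool, and dataset names here are assumptions): an executable dropped into /etc/systemd/system-environment-generators/ imports the boot pool if necessary and rewrites the canmount column of the cache file so that zfs-mount-generator only emits mount units for the wanted boot environment.

```sh
#!/bin/sh
# Illustrative sketch only, not the author's actual generator.
# Assumption: the root dataset carries a user property (org.example:bootfs)
# naming the bpool dataset that should be mounted for this install.

CACHE=/etc/zfs/zfs-list.cache/bpool

ROOTFS=$(findmnt -n -o SOURCE /)
WANTED=$(zfs get -H -o value org.example:bootfs "$ROOTFS")

# Import the boot pool (without mounting anything) if it is not already there.
zpool list bpool >/dev/null 2>&1 || zpool import -N bpool

# Rewrite the canmount column (third field in the cache file) so that only the
# wanted dataset stays mountable; zfs-mount-generator picks this up afterwards.
if [ -f "$CACHE" ] && [ -n "$WANTED" ] && [ "$WANTED" != "-" ]; then
    awk -v want="$WANTED" 'BEGIN { FS = OFS = "\t" }
        $1 == want                    { $3 = "on" }
        $1 != want && $1 ~ /^bpool\// { $3 = "noauto" }
        { print }' "$CACHE" > "$CACHE.tmp" && mv "$CACHE.tmp" "$CACHE"
fi

# Environment generators are expected to print VAR=VALUE lines on stdout;
# this one has nothing to add, so it just exits cleanly.
exit 0
```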
I feel a lot of the tooling for Linux already exists, but is perhaps not exploited to the fullest yet.
I am looking at this from a non-systemd-based distribution point of view (I should have been clearer about that; SysV init should have been a clue), trying to get a complicated setup under control with simple tooling.
(Home datasets were something I had not used before with this setup and decided to test; I just need to reorder the start/stop sequence. Maybe it was my SSH session keeping /home/user busy.)
Could you provide your system-environment-generators setup? I'm interested in doing the same, but with the home dataset.
I don't fully grasp how changing some env vars will trigger the zfs mount. Or do you misuse it to run a bash script that calls zfs mount directly?
I propose a simple solution to the hassles of dealing with many systems on ZFS as root and the datasets they should or should not mount.
https://github.com/openzfs/zfs/issues/14352 https://github.com/openzfs/zfs/issues/15990
Running all systems with ZFS_MOUNT='no' in /etc/default/zfs is already required when using many other systems on a ZFS root, since every dataset outside the current root will otherwise be mounted automatically (not desired) unless it is set to canmount=noauto or mountpoint=legacy (a pain to do on every dataset created). There is also the need to set canmount=noauto on every child of the other systems' roots.
Using mountpoint=legacy is a messy and tedious option: it needs to be set on every newly created dataset, and that dataset has to be entered into the current fstab and into the fstab of every other system that wishes to share it (reboot/edit or mount/edit). It is also not functional if a non-root user is creating datasets and cannot edit fstab.
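For illustration, the per-dataset busywork described above looks roughly like this (dataset names and paths are examples):

```sh
# For every new dataset that should be shared between installs:
zfs create rpool/userdata/projects
zfs set mountpoint=legacy rpool/userdata/projects   # or: canmount=noauto

# ...then add it to /etc/fstab on the current system *and* on every other
# system that wants it, and mount it (or reboot):
echo 'rpool/userdata/projects /userdata/projects zfs defaults 0 0' >> /etc/fstab
mkdir -p /userdata/projects
mount /userdata/projects
```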
Hence the need for a very simple zfstab file (with recursive functionality) and a simple zfstab init script.
(A non-root user still cannot edit zfstab, but recursive auto-mounts let them create datasets under parents that get recursive mounts, and those will be mounted, on reboot at least; the same goes for other systems.)
(Even simpler if -r is allowed as the first argument in zfstab; it can be made more complex if an exclude option is required on recursive sets.)
zfstab file (no empty lines)
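A minimal sketch of what such a file could contain, assuming one dataset per line with an optional leading -r to request a recursive mount (dataset names are illustrative):

```
-r rpool/userdata
rpool/shared/projects
bpool/debian
```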
zfstab init file
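And a minimal sketch of the matching SysV-style init script, assuming the zfstab format above and plain zfs mount / zfs unmount calls (not the actual script):

```sh
#!/bin/sh
### BEGIN INIT INFO
# Provides:          zfstab
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     S
# Default-Stop:      0 6
# Short-Description: Mount/unmount datasets listed in /etc/zfstab
### END INIT INFO
# Illustrative sketch; paths and format are assumptions.

ZFSTAB=/etc/zfstab

# Expand one zfstab line into the dataset(s) it covers.
# "-r pool/dataset" expands to the dataset and all of its children.
expand_line() {
    if [ "$1" = "-r" ]; then
        zfs list -rH -o name "$2"
    else
        echo "$1"
    fi
}

list_datasets() {
    while read -r flag name; do
        expand_line "$flag" "$name"
    done < "$ZFSTAB"
}

case "$1" in
    start)
        list_datasets | while read -r ds; do
            zfs mount "$ds" 2>/dev/null || true
        done
        ;;
    stop)
        # Reverse-sort so children are unmounted before their parents.
        list_datasets | sort -r | while read -r ds; do
            zfs unmount "$ds" 2>/dev/null || true
        done
        ;;
    restart)
        "$0" stop
        "$0" start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
esac
```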
Willing to add extra checks if anyone shows interest (the minimal functionality works for me without extra checks).