openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

/etc/zfstab as a solution to many systems on zfs as ROOT and canmount property #16011

Open dandudikof opened 8 months ago

dandudikof commented 8 months ago

I propose a simple solution to the hassle of managing multiple systems on ZFS as root, and which datasets each of them should or should not mount.

https://github.com/openzfs/zfs/issues/14352 https://github.com/openzfs/zfs/issues/15990

Running all systems with ZFS_MOUNT='no' in /etc/default/zfs is already required when multiple systems share a ZFS root pool, since every dataset outside the current root would otherwise be mounted automatically (not desired) unless it is set to canmount=noauto or mountpoint=legacy (a pain to do on every dataset created). There is also the need to set canmount=noauto on every child of the other systems' root datasets.
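For concreteness, the current workaround looks something like this on every installed system (the pool and dataset names are just examples):

# in /etc/default/zfs
ZFS_MOUNT='no'

# and on every dataset belonging to another system's root
zfs set canmount=noauto pool/ROOT/otheros
zfs set canmount=noauto pool/ROOT/otheros/var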

Using mountpoint=legacy is a messy and tedious option: it needs to be set on every newly created dataset, and that dataset must then be entered into the current fstab and into the fstab of every other system that wishes to share it (reboot/edit or mount/edit each time). It is also not workable if a non-root user is creating datasets and cannot edit fstab. A sketch of that workflow follows below.
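The legacy workflow looks roughly like this for each new dataset (names are examples), and the fstab line has to be repeated in every system that should see the dataset:

zfs set mountpoint=legacy pool/srv/share
echo 'pool/srv/share /srv/share zfs defaults 0 0' >> /etc/fstab
mount /srv/share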

Hence the need for a very simple /etc/zfstab file (with recursive functionality) and a simple zfstab init script.

(A non-root user still cannot edit zfstab, but recursive auto-mounts let them create datasets under a parent listed for recursive mounting, and those will be mounted, at least on reboot; the same goes for the other systems.)

(Even simpler if -r is allowed as the first field of a zfstab line; it can be made more complex if an exclude option is needed for recursive entries.)

zfstab file (one dataset per line; the init script below skips blank lines and comments)

pool/tmp
pool/srv/share
-r pool/srv/users
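Because $line is expanded unquoted in the init script, a leading -r simply becomes the recursive flag to zfs list, so one line covers the whole subtree, e.g. (output illustrative):

$ zfs list -H -o name -r pool/srv/users
pool/srv/users
pool/srv/users/alice
pool/srv/users/bob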

zfstab init file

#!/bin/sh
#zfstab V0.1
#
### BEGIN INIT INFO
# Provides:          zfstab
# Required-Start:    mtab zfs-import
# Required-Stop:
# Default-Start:     S
# Default-Stop:
# X-Start-Before:
# X-Stop-After:
# Short-Description: zfstab
# Description:       zfstab for root on zfs systems
### END INIT INFO

case "$1" in

    start)

        while read -r line; do

            # skip blank lines and comments
            case "$line" in ''|'#'*) continue ;; esac

            # $line is expanded unquoted on purpose: a leading "-r"
            # becomes the recursive flag to zfs list
            for set in $(zfs list -H -o name $line); do

                # skip datasets that are already mounted
                [ "$(zfs get -H -o value mounted "$set")" = "yes" ] && continue

                echo "zfs mount $set"
                zfs mount "$set"

            done

        done < /etc/zfstab
    ;;

esac
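To try the sketch on a Debian-style sysvinit system, installation would go along these lines (assuming update-rc.d is available; adjust for your distro's init tooling):

install -m 0755 zfstab /etc/init.d/zfstab
update-rc.d zfstab defaults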

Willing to add extra checks if anyone shows interest (the minimal functionality works for me without them).

samvde commented 8 months ago

For my current setup at home I have solved this issue using systemd "system-environment-generators".

The environment generator manipulates the files in /etc/zfs/zfs-list.cache based on properties set on the root filesystem. It imports the required zpool if needed, enables canmount for the datasets in question, and disables canmount for datasets at the same level. For example, it can enable bpool/ubuntu and thereby disable bpool/debian, for setups where /boot typically lives in a separate pool, while keeping /userdata available to all Linux installations in the same rpool.

After the environment generator has run, the systemd mount generator picks the result up, creates the mount units, and thus takes care of unmounting everything cleanly during shutdown.
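Roughly, a minimal version of such a generator could look like the sketch below. The pool names, the cache-file path, and the assumption that canmount is the third tab-separated field in the zfs-list.cache format are illustrative, not a drop-in implementation; the generator's environment output is unused and it is run here purely for its side effect on the cache file:

#!/bin/sh
# hypothetical /etc/systemd/system-environment-generators/10-zfs-root-select
# runs early at boot; rewrites canmount in the boot pool's list cache so
# that zfs-mount-generator later creates mount units only for the booted root
root_ds=$(findmnt -n -o SOURCE /)        # e.g. rpool/ubuntu
want="bpool/${root_ds##*/}"              # e.g. bpool/ubuntu (assumed layout)
cache=/etc/zfs/zfs-list.cache/bpool
[ -f "$cache" ] || exit 0
awk -v want="$want" 'BEGIN { FS = OFS = "\t" }
    $1 ~ /^bpool\// { $3 = ($1 == want) ? "on" : "off" }
    { print }' "$cache" > "$cache.tmp" && mv "$cache.tmp" "$cache"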

I feel a lot of the tooling for Linux already exists, but is perhaps not yet exploited to the fullest.

dandudikof commented 8 months ago

I am looking at this from the point of view of a non-systemd distribution (I should have been clearer about that; the sysv init script should have been a clue). I am trying to get a complicated setup under control with simple tooling.

(Home datasets were something I had not used before with this setup and decided to test; I just need to reorder the start/stop sequence. Maybe it was my ssh session keeping /home/user busy.)
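If the busy mount shows up again, a quick way to see what is holding it (assuming fuser from psmisc is installed):

fuser -vm /home/user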

Adito5393 commented 2 weeks ago

(quoting samvde's reply above)

Could you provide your system-environment-generators setup? I'm interested in doing the same, but with the home dataset. I don't fully grasp how changing some env vars will trigger the zfs mount. Or do you misuse the generator to run a bash script that calls zfs mount directly?