dasJ closed this issue 3 years ago
I don't really see a reason for doing this; please comment if you do.
Maybe this is more useful than I expected.
I was about to open an issue with a similar goal when I found this. While setting up a new install at work, I came across a config file of which I wasn't previously aware: `/etc/default/zfs` (kind of oddly named, tbh). It is part of ZOL and defines an environment variable called `ZFS_INITRD_ADDITIONAL_DATASETS`. I had been having trouble when I want `/var` to be in the same pool as the `/` mountpoint but not a direct descendant of the root filesystem dataset. There are various issues that one has to avoid here, and I solved some of them, but the result remained sufficiently dirty that I was unsatisfied.
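For context, the relevant line looks something like this (dataset names are hypothetical; as far as I can tell, ZOL's initramfs scripts treat the value as a space-separated list):

```sh
# /etc/default/zfs (excerpt)
# Extra datasets for the initrd to mount before switching root
ZFS_INITRD_ADDITIONAL_DATASETS="rpool/var rpool/var/log"
```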
The basic root of the issue seems to be that once the dataset with the `bootfs` property is identified, only its child datasets are then mounted. For some mountpoints, e.g. `/opt` or `/tmp`, this isn't a big issue: if they get skipped here, they can be mounted later in the boot process. However, for `/var` this doesn't work, because (even if you delete everything under `/var` within the root dataset) additional files are created there during the boot process, so the mount fails later on due to the mountpoint not being empty. My working solution was to set the `overlay` property, but this is still not ideal.
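For reference, that workaround is a one-liner (`rpool/var` stands in for whatever your `/var` dataset is called):

```sh
# Allow mounting the dataset over a /var that is no longer empty
zfs set overlay=on rpool/var
```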
The environment variable I mentioned above, `ZFS_INITRD_ADDITIONAL_DATASETS`, is designed specifically to handle this case, and it is my strongly held opinion that `sd-zfs` should respect this standard environment variable. It is, in fact, possible that we would not need `/etc/initcpio/sd-zfs-mount` at all. Moreover, it seems likely that the bulk of the work on this issue has already been done by the ZOL team; we just have to understand it and integrate it. That would make me very happy.
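Honoring the variable could be as simple as something like this inside the hook (a sketch, not actual sd-zfs code; it assumes the value is a space-separated dataset list, as ZOL's own scripts seem to treat it):

```sh
# Pick up the ZOL defaults file inside the initrd, if present
[ -r /etc/default/zfs ] && . /etc/default/zfs

# Mount each additional dataset before switching root
for d in $ZFS_INITRD_ADDITIONAL_DATASETS; do
    zfs mount "$d"
done
```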
PS: I neglected to mention this above because I didn't understand it, but now I think I may... I actually have two pools on most of my systems: a single-drive `rpool` for rootfs stuff and a separate mirrored `upool` that has `/home` (mainly because I can't cough up the cash for large enough SSDs, so I have to just use what I've got for the root and live with spinning rust for the big storage). I was going to point out that I have never run into problems here with mounting home, but I think this is because I also always enable `zfs-mount.service` and the other `zfs-*.service` units in systemd. So, I think what is actually happening is that the initrd mounts the child datasets of the dataset with the `bootfs` property, and then the rest of my datasets (e.g., from `upool`) get mounted later in the boot process by the systemd service. Since the `/home` directory is empty after a typical Arch install (from the perspective of the filesystem housed on `rpool`), there are no errors and everything goes smoothly. However, this does not seem like the most canonical way to approach this. I am actually in the process of setting up a new install, so I will report back on whether the settings in `/etc/default/zfs` are enough to solve this without any systemd mounts after the initrd stage has completed.
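For completeness, the units I enable are the stock ZOL ones (this list is from memory, so treat it as approximate):

```sh
# Import pools from the cache file, mount all datasets, and pull in
# the zfs.target dependency chain at boot
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target
```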
Related:
It seems like the ZOL team tried to do something similar, but the effort stalled due to some complications. Reception was generally warm. Their conclusion was to relegate systemd customizations to individual distros, which jibes with the approach here.
To implement this, a new file called `/etc/initcpio/sd-zfs-mount` will be added. It will contain all datasets that should be mounted before switching roots.

- The generator will read this file when it wants to mount just one pool (`root=tank/root` will usually cause it to import `tank` only instead of importing all pools). If datasets from other pools need to be imported as well, it will add them to the import command.
- The mounter reads the file again and mounts the required datasets.
The structure of the file should look like this:
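For example, with one dataset per line (these names are placeholders):

```
tank/data
   upool/home
```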
This would import the two datasets and mount them at their specified `mountpoint` values. Whitespace at the beginning and end of each line should be ignored.
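A minimal sketch of the parsing both the generator and the mounter could share, assuming the format above (illustrative only, not actual sd-zfs code):

```sh
#!/bin/sh
# Read /etc/initcpio/sd-zfs-mount, trim surrounding whitespace,
# import the dataset's pool if necessary, then mount the dataset.
while IFS= read -r line; do
    # strip leading/trailing whitespace and skip blank lines
    dataset=$(printf '%s' "$line" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
    [ -n "$dataset" ] || continue
    pool=${dataset%%/*}                 # pool name is the first path component
    zpool list "$pool" >/dev/null 2>&1 || zpool import -N "$pool"
    zfs mount "$dataset"
done < /etc/initcpio/sd-zfs-mount
```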