Closed: alaricljs closed this issue 7 years ago
Details on "does not function": systemd appears to reach /var before ZFS does, creates some directories there, and starts writing data. Since /var now has contents, ZFS refuses to mount the /var filesystem. The sub-volumes I have under /var do mount, since they land in empty paths that systemd creates but does not populate before ZFS runs its mount routine.
So I've read up on systemd, and at the very least /var would need to be mounted before journald starts. I haven't figured out how feasible that is. I've reviewed my reasoning for keeping /var separate, and while "nice", it's not necessary, so I've converted to a unified root.
@alaricljs Nope, you are wrong here. sd-zfs mounts all subdatasets of your root (default, opt, var, var/cache). They are mounted relative to /sysroot (/sysroot/opt, /sysroot/var, ...), and then systemd switches root to /sysroot and runs the system systemd. The systemd instance in the initrd writes its logs to /var/log, which is not part of /sysroot.
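To illustrate the mount layout being described, here is a sketch of the initrd-phase view. The dataset names are taken from this thread and are assumptions about the poster's layout, not output from the original report:

```shell
# Hypothetical layout (dataset names assumed from this thread).
# During the initrd phase, sd-zfs mounts the bootfs and its child
# datasets relative to /sysroot, e.g.:
#
#   rpool/root/default      -> /sysroot
#   rpool/root/default/opt  -> /sysroot/opt
#   rpool/root/default/var  -> /sysroot/var
#
# Anything the initrd's own systemd writes to /var/log at this point
# lands on the initrd's temporary root, not inside /sysroot.

# Inspect the datasets and their configured mountpoints:
zfs list -o name,mountpoint -r rpool/root
```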
Hello,
I'm really sorry to comment on this old issue, but I'm having a slightly different but similar problem to alaricljs's.
I have the same layout in zfs list, and the submounts /usr and /var are NOT getting a "zfs mount" on boot. Which logs do you need from me to understand this problem better?
Best regards, Christian
@dasJ can correct me if I am wrong, but I want to clarify what I think was a misunderstanding. It looks like alaricljs has a container dataset at `rpool/root` (presumably for boot environments), a filesystem root at `rpool/root/default` (an important distinction), and is concerned about datasets which are descendants of `rpool/root` and NOT of `rpool/root/default`. It is my understanding that only the datasets which are descendants of the dataset with the `bootfs` property will be recursively mounted. Thus, even though `rpool/root/var` has a mountpoint of `/var`, which is below the `/` mountpoint of `rpool/root/default` (the bootfs dataset), it will not be mounted under this scheme.
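A quick way to see which datasets fall under this rule. This is a sketch using the dataset and pool names mentioned in this thread, which are assumptions about the actual layout:

```shell
# Identify the bootfs dataset; under the scheme described above,
# only its descendants are mounted recursively.
zpool get -H -o value bootfs rpool
# (expected to print something like rpool/root/default)

# Datasets under rpool/root but NOT under rpool/root/default
# (such as rpool/root/var in this thread) would be skipped.
zfs list -r -o name,mountpoint rpool/root
```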
Now, it is within the functionality of OpenZFS to handle this by letting the user identify certain datasets to be mounted after the descendant hierarchy is traversed, but this has not been implemented in sd-zfs. This is something that interests me as well, so I would like to see it implemented.
As alaricljs mentioned, the main issue is that `/var` or `/usr` get mounted too late in the boot process even if you have enabled units like `zfs*.{target,service}`. Then, since they get populated during early boot, ZFS fails to mount them because the directory is not empty. One way around this (though it has its own issues and I don't recommend it) is to set the `overlay` property on the dataset in question. Then it will happily mount over the non-empty mountpoint.
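The overlay workaround looks like this. The dataset name is an assumption based on the layout discussed in this thread; substitute your own:

```shell
# With overlay=on, ZFS will mount the dataset even if the target
# directory already contains files (those files become hidden
# underneath the mount, which is why this approach has caveats).
zfs set overlay=on rpool/root/var

# Verify the property took effect:
zfs get overlay rpool/root/var
```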
Depending on what you are trying to do (e.g., whether or not you use boot environments), the simplest fix may be to make the datasets for `/usr` and `/var` descendants of your `bootfs` dataset (the one mounted at `/` with that property set). Unless you are making heavy use of boot environments, I cannot think of any benefit to not doing so. This is the solution I am currently satisfied with for my own purposes.
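Restructuring the hierarchy as suggested above could be sketched as follows. The dataset names are assumptions taken from this thread, and renames of mounted datasets generally need to be done from a rescue or live environment:

```shell
# Move /var and /usr under the bootfs dataset so a recursive mount
# of the bootfs hierarchy picks them up.
# (Run from a live/rescue environment; the datasets must be
# unmountable for the rename to succeed.)
zfs rename rpool/root/var rpool/root/default/var
zfs rename rpool/root/usr rpool/root/default/usr

# Check that the mountpoints still resolve where you expect:
zfs list -o name,mountpoint -r rpool/root/default
```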
Hi @guygma,
I'm sorry to disappoint you. I went back and reinstalled my system with a fresh Arch install. I chose the "old" non-systemd boot in the initramfs and found another bug, in GRUB, which had been bugging me from the beginning.
I cannot clarify or explain any further details now, as the old system is gone.
Greetings, Christian
It does not appear that sd-zfs currently supports a separate non-legacy /var DSN. It would be nice if it did, although I don't know what would be required. This is my mountable OS DSN list:
As shown, it does not function with sd-zfs (nor without). For this to work properly, I must make /var (and its children) a legacy mount in /etc/fstab.
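For reference, the legacy-mount workaround mentioned above would look roughly like this. The dataset name is an assumption based on the layout in this thread:

```shell
# Switch the dataset to legacy mounting, so ZFS no longer manages
# the mountpoint itself:
zfs set mountpoint=legacy rpool/root/var

# Then add a matching entry to /etc/fstab so systemd's fstab
# generator creates a mount unit and mounts it early in boot:
#
#   rpool/root/var  /var  zfs  defaults  0 0
```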