Closed: yacoob closed this issue 5 years ago.
Judging by two threads on the mailing list (1, 2), a configuration like that is indeed discouraged. Luckily, I can still redo the new pool without much trouble.
If this configuration is really discouraged, what would be a good place to add a note about it, to prevent people from following down that path? :)
Could you run the command zfs get all magi? root=ZFS=magi/ for GRUB is incorrect.

This is on a system booted from lcl, with magi imported via zpool import -R /mnt/flash-oo magi:
# zfs get all magi
NAME PROPERTY VALUE SOURCE
magi type filesystem -
magi creation Wed Jul 31 19:06 2019 -
magi used 45.4G -
magi available 179G -
magi referenced 1.68G -
magi compressratio 1.59x -
magi mounted yes -
magi quota none default
magi reservation none default
magi recordsize 128K default
magi mountpoint /mnt/flash-oo local
magi sharenfs off default
magi checksum on default
magi compression lz4 local
magi atime on default
magi devices on default
magi exec on default
magi setuid on default
magi readonly off default
magi zoned off default
magi snapdir hidden default
magi aclinherit restricted default
magi createtxg 1 -
magi canmount noauto local
magi xattr sa local
magi copies 1 default
magi version 5 -
magi utf8only on -
magi normalization formD -
magi casesensitivity sensitive -
magi vscan off default
magi nbmand off default
magi sharesmb off default
magi refquota none default
magi refreservation none default
magi guid 13077437144030891943 -
magi primarycache all default
magi secondarycache all default
magi usedbysnapshots 0B -
magi usedbydataset 1.68G -
magi usedbychildren 43.8G -
magi usedbyrefreservation 0B -
magi logbias latency default
magi dedup off default
magi mlslabel none default
magi sync standard default
magi dnodesize auto local
magi refcompressratio 2.02x -
magi written 1.68G -
magi logicalused 71.1G -
magi logicalreferenced 3.05G -
magi volmode default default
magi filesystem_limit none default
magi snapshot_limit none default
magi filesystem_count none default
magi snapshot_count none default
magi snapdev hidden default
magi acltype posixacl local
magi context none default
magi fscontext none default
magi defcontext none default
magi rootcontext none default
magi relatime on local
magi redundant_metadata all default
magi overlay off default
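For reference, the properties in that output that matter most for mounting the root filesystem can be queried on their own (same pool name as above; adjust as needed):

# Mount-related properties of the pool's root dataset
zfs get -o property,value,source canmount,mountpoint magi
# Pool-level default dataset used by root=zfs:AUTO
zpool get bootfs magi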
(Without a bpool) someone complained on the mailing list that a root filesystem on a pool's root dataset broke after #8052. I wrote #8356 (merged May 6) to correct that situation.
It is possible your version doesn't have this fix.
Adding zfsdebug=1 to your kernel parameters might give some output that will help us track down the issue.
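A minimal sketch of how to add that parameter on a Debian system booting through GRUB (assuming the stock /etc/default/grub layout; for a single boot it can also be added by editing the kernel line from the GRUB menu):

# /etc/default/grub -- append the debug flag to the existing value, e.g.
#   GRUB_CMDLINE_LINUX="... zfsdebug=1"
# then regenerate the GRUB configuration and reboot:
update-grub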
@ghfields yup, I don't have that patch yet. All right, I'll lay out my new pool better; happy to see that even the edge cases are addressed :)
Thanks for the explanations!
If this configuration is really discouraged, what would be a good place to add a note about it, to prevent people from following down that path? :)
Certainly in any howto regarding ZFS, since using the root dataset of a pool for anything other than a container from which child datasets inherit properties is likely to cause problems later.
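A rough sketch of that layout for the pool in this issue (the magi/ROOT and magi/ROOT/debian dataset names are illustrative, not prescribed; this assumes the pool is being rebuilt rather than converted in place):

# Keep the pool's root dataset as a property container only
zfs set canmount=off magi
zfs set mountpoint=none magi
# Create a container for boot environments and a child dataset for /
zfs create -o canmount=off -o mountpoint=none magi/ROOT
zfs create -o canmount=noauto -o mountpoint=/ magi/ROOT/debian
# Let root=zfs:AUTO (and other tooling) know which dataset to boot
zpool set bootfs=magi/ROOT/debian magi
# The kernel parameter then becomes root=ZFS=magi/ROOT/debian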
System information
Describe the problem you're observing
I have a working Debian system with rootfs on ZFS. I'm trying to move / to a separate pool right now.

Old pool:

and it boots with root=ZFS=lcl/sys/root as the kernel parameter.

New pool:

lilith is the boot pool, magi is the root pool. As you can see, I went with the top-most dataset being the filesystem in question (/boot and / respectively). This in turn makes the rootfs specification tricky; neither root=ZFS=magi nor root=ZFS=magi/ works. I'd rather not rely on root=zfs:AUTO to control which pool is being booted, and bootfs has the same problem.

Is this configuration even supported? Or is there a (silent?) assumption that rootfs will not be the top-most dataset?

I do apologise for the horrible puns in the paths/names :D
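For context, a quick way to compare what the running system actually booted with against each pool's bootfs setting (pool names as used above; a sketch for inspection only, not a fix):

# Kernel command line the running system was booted with
cat /proc/cmdline
# Dataset each pool would hand to root=zfs:AUTO via its bootfs property
zpool get bootfs lilith magi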