nix-community / disko

Declarative disk partitioning and formatting using nix [maintainers=@Lassulus @Enzime @iFreilicht]
MIT License

Rootfs mount order issue (tmpfs + zfs) #270

Closed · yangm97 closed this issue 1 day ago

yangm97 commented 1 year ago

The problem

disko imports (and thereby mounts) the zfs pool before mounting the rootfs (which is tmpfs in my case), so the later tmpfs mount shadows the zfs native mounts. Example log:

disko --mode mount hosts/sisyphus/disko-config.nix
+ zpool list sisyphus
+ zpool import -l -R /mnt sisyphus
0 / 0 keys successfully loaded
+ findmnt tmpfs /mnt/
+ mount -t tmpfs none /mnt/ -o mode=755 -o X-mount.mkdir
+ findmnt /dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0901b343af36a38f56915e6ae7f6378d8e795333704cbe0c39e64a2528c00d94de2e000000000000000000003430e0f8ff1f2d2067558107c92aa62d-0:0-part1 /mnt/boot
+ mount /dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0901b343af36a38f56915e6ae7f6378d8e795333704cbe0c39e64a2528c00d94de2e000000000000000000003430e0f8ff1f2d2067558107c92aa62d-0:0-part1 /mnt/boot -t vfat -o defaults -o X-mount.mkdir
+ findmnt sisyphus/safe/persist /mnt/persist
+ mount sisyphus/safe/persist /mnt/persist -o X-mount.mkdir -o defaults -t zfs

Ideal scenario

disko should mount the rootfs before importing the zfs pool, so the native mounts end up on top of the tmpfs instead of underneath it.

BTW I'm only using one legacy mount to work around impermanence wanting a mount that has neededForBoot set from the nix configuration (neededForBoot doesn't quite "see" zfs native mounts).
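
For context, the NixOS side of that one legacy mount looks roughly like this (a minimal sketch based on my config further down; neededForBoot is a regular NixOS fileSystems option):

{ ... }: {
  # neededForBoot can only be set on mounts NixOS manages through
  # fileSystems (i.e. fstab/"legacy" mounts); zfs native mounts never
  # appear there, hence the single legacy mount for /persist.
  fileSystems."/persist".neededForBoot = true;
}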

Workarounds

From the log output, it appears to me that using legacy mounts for all zfs datasets would work.
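
A minimal sketch of that workaround for a single dataset (pool and dataset names are from my setup; same pattern as in the config I posted further down):

{ ... }: {
  disko.devices.zpool.sisyphus.datasets."safe/persist" = {
    type = "zfs_fs";
    # "legacy" stops zfs from auto-mounting the dataset at pool import
    options.mountpoint = "legacy";
    # disko then mounts it fstab-style, after the tmpfs rootfs is in place
    mountpoint = "/persist";
  };
}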

Lassulus commented 1 year ago

Not sure if that's a disko or a nixpkgs problem. Maybe posting a config which causes the breakage will help me investigate.

yangm97 commented 1 year ago

Sorry, I think I forgot to save the troubling config, but this should at least be close to reproducing the issue. If I remember correctly, I ended up using legacy mountpoints for all datasets in order to work around the ordering issue.

{ ... }: {
  disko.devices = {
    nodev = {
      "/" = {
        fsType = "tmpfs";
        mountOptions = [
          "mode=755"
        ];
      };
    };
    disk = {
      x = {
        type = "disk";
        device = "/dev/disk/by-id/usb-USB_SanDisk_3.2Gen1_0901b343af36a38f56915e6ae7f6378d8e795333704cbe0c39e64a2528c00d94de2e000000000000000000003430e0f8ff1f2d2067558107c92aa62d-0:0";
        content = {
          type = "table";
          format = "gpt";
          partitions = [
            {
              name = "ESP";
              start = "0";
              end = "1024MiB";
              fs-type = "fat32";
              bootable = true;
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
              };
            }
            {
              name = "zfs";
              start = "1024MiB";
              end = "100%";
              content = {
                type = "zfs";
                pool = "sisyphus";
              };
            }
          ];
        };
      };
    };
    zpool = {
      sisyphus = {
        type = "zpool";
        options = {
          ashift = "13";
          autotrim = "on";
        };
        rootFsOptions = {
          atime = "off";
          compression = "zstd";
          "com.sun:auto-snapshot" = "false";
          dedup = "on";
          xattr = "sa";
          mountpoint = "none";
        };
        mountRoot = "/mnt";
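        # take a recursive snapshot of every dataset in its freshly
        # created (empty) state, as a rollback baseline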
        postCreateHook = "zfs snapshot -r sisyphus@blank";

        datasets = {
          local.type = "zfs_fs";
          "local/reserved" = {
            type = "zfs_fs";
            options.mountpoint = "none";
            options.reservation = "12G";
          };
          "local/nix" = {
            type = "zfs_fs";
            # options.mountpoint = "/nix";
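            # "legacy" stops zfs from auto-mounting at pool import; the
            # disko-level mountpoint below mounts it fstab-style instead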
            options.mountpoint = "legacy";
            mountpoint = "/nix";
          };
          safe.type = "zfs_fs";
          "safe/persist" = {
            type = "zfs_fs";
            options."com.sun:auto-snapshot" = "true";
            # options.mountpoint = "/persist";
            options.mountpoint = "legacy";
            mountpoint = "/persist";
          };
          "safe/home" = {
            type = "zfs_fs";
            options.recordsize = "1M";
            # options.mountpoint = "/home";
            options.mountpoint = "legacy";
            mountpoint = "/home";
            options."com.sun:auto-snapshot" = "true";
          };
          "safe/home/andre" = {
            type = "zfs_fs";
            options.mountpoint = "legacy";
            mountpoint = "/home/andre";
          };
          "safe/home/yan" = {
            type = "zfs_fs";
            options.mountpoint = "legacy";
            mountpoint = "/home/yan";
          };
        };
      };
    };
  };
}
Enzime commented 10 months ago

I think this issue should be fixed by #474

iFreilicht commented 1 day ago

As that PR got merged, I'll close this issue for now. Feel free to re-open if you're still experiencing this issue.