rancher / k3os

Purpose-built OS for Kubernetes, fully managed by Kubernetes.
https://k3os.io
Apache License 2.0
3.5k stars 403 forks

Drive and RAID mounting documentation #646

Open FlexibleToast opened 3 years ago

FlexibleToast commented 3 years ago

Is your feature request related to a problem? Please describe.
I'm having trouble understanding how to mount a drive at boot. I've created an mdadm RAID array and can't seem to find good documentation on how to get it to mount.

Describe the solution you'd like
A clear and concise description of how to create the proper startup file or cloud-init to mount a secondary drive or RAID array on boot.

Describe alternatives you've considered
I've looked at cloud-init resources and asked on the Rancher Slack channel.

Additional context
I'm using k3os v0.19.5-rc.1 because I want the newer k3s with embedded etcd (I was previously testing this with Ubuntu 20.04 as the base OS). According to the docs, the configuration files are located at /k3os/system/config.yaml, /var/lib/rancher/k3os/config.yaml, and /var/lib/rancher/k3os/config.d/*, with the last being where you should make changes on a running system. That directory does not exist on my install; is this still the appropriate location? I can't seem to find it now, but somewhere I read that the cloud-init configuration for mounts supported the first 4 options. So, I attempted to make:

mounts:
- [ md0, /var/lib/longhorn, ext4, "noatime,nodiratime,data=writeback" ]

That didn't actually mount the array. Running lsblk, it's plain to see that the OS doesn't even see the array. After running sudo mdadm --assemble --scan it does see the array, and at that point I can mount it manually with mount /dev/md0 /var/lib/longhorn. What I can't figure out is how to get this to happen on boot. I don't imagine that mounting additional drives is a fringe use case, and with Harvester I imagine it's actually necessary.
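For reference, the manual steps described above, collected in one place (a one-off recovery sequence, not persistent across reboots):

```shell
# Assemble any existing md arrays found by scanning the disks
sudo mdadm --assemble --scan
# /dev/md0 should now be visible in the block device list
lsblk
# Make sure the mountpoint exists, then mount the array by hand
sudo mkdir -p /var/lib/longhorn
sudo mount /dev/md0 /var/lib/longhorn
```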

FlexibleToast commented 3 years ago

Found a workaround by adding some entries to my config.yaml:

write_files:
  - path: /etc/fstab
    content: |-
      /dev/cdrom    /media/cdrom      iso9660 noauto,ro 0 0
      /dev/usbdisk  /media/usb        vfat    noauto,ro 0 0
      /dev/md0      /var/lib/longhorn ext4    noatime,nodiratime,data=writeback,nofail 0 0
boot_cmd:
  - mdadm --assemble --scan
run_cmd:
  - mkdir -p /var/lib/longhorn
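One caveat with this ordering (an assumption on my part about when each section runs): run_cmd entries seem to execute late in boot, so on the very first boot the mountpoint may not exist yet when the fstab entry is processed; the nofail option at least keeps that from hanging boot. A variant that creates the directory earlier, in boot_cmd, might look like:

```yaml
# Hypothetical variant: create the mountpoint before local mounts are processed
boot_cmd:
  - mkdir -p /var/lib/longhorn
  - mdadm --assemble --scan
```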

hazcod commented 3 years ago

I'm wondering if the same would be true for ZFS.

UnstoppableMango commented 2 years ago

The workaround worked for me, but only after a reboot. On first boot after install, /etc/fstab was written correctly but the drives were not mounted.

I agree it would be nice to have some documentation on the intended configuration for this use case.

ivan98 commented 2 years ago

boot_cmd:
  # move loadkmap to boot sys-level so that localmount can be started correctly
  - "rc-update del loadkmap sysinit"
  - "rc-update add loadkmap boot"
  - "rc-update add mdadm boot"
  - "rc-update add mdadm-raid boot"
  # start crond so that log rotate is enabled
  - "rc-update add crond default"
  # if a mount is defined as critical, it will do a hard fail if it cannot be mounted
  - "sed -i 's|^#critical_mounts.*|critical_mounts=\"/data\"|' /etc/conf.d/localmount"
  # my DHCP is slow, so wait for it by waiting for a ping test to Internet, before writing /etc/issue
  - "sed -i 's/^#include_ping_test.*/include_ping_test=yes/' /etc/conf.d/net-online"
  # log startup to /var/log/rc.log
  - "sed -i 's/^#rc_logger.*/rc_logger=\"YES\"/' /etc/rc.conf"
  # setup my timezone
  - "ln -vs /usr/share/zoneinfo/Asia/Singapore /etc/localtime"
  - "echo 'Asia/Singapore' > /etc/timezone"
write_files:
  - path: /etc/fstab
    content: |-
      /dev/cdrom   /media/cdrom iso9660 noauto,ro 0 0
      /dev/usbdisk /media/usb   vfat    noauto,ro 0 0
      /dev/md0     /data        ext4    acl,noatime,user_xattr 0 2
      #
    owner: root
    permissions: '0644'
  - path: /etc/mdadm.conf
    content: |-
      ARRAY /dev/md0 metadata=1.2 spares=1 name=server_name:0 UUID=7274ffca:02078f43:379651fa:91de8c6d
      #
    owner: root
    permissions: '0644'

I use the mdadm and localmount startup scripts instead of running the binaries myself. localmount can auto-create the mountpoint if it does not exist, whereas mdadm can run as a daemon to monitor the MD devices for changes.
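If anyone needs to reproduce the ARRAY line for /etc/mdadm.conf on their own system, mdadm can print it for all currently assembled arrays:

```shell
# Print ARRAY definitions (device, metadata version, name, UUID) for
# every assembled array; the output can be pasted into /etc/mdadm.conf
sudo mdadm --detail --scan
```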