canonical / microk8s

MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
https://microk8s.io
Apache License 2.0

Microk8s 1.18 Change default host path storage location #1956

Closed thwhk closed 1 year ago

thwhk commented 3 years ago

Hi all, I want to use my USB drive as the default storage location. I set the USB drive's mount point to /mnt/microk8s_usb. I followed https://github.com/ubuntu/microk8s/issues/463#issuecomment-491285745 and changed /var/snap/microk8s/current/args/containerd as follows:

--config ${SNAP_DATA}/args/containerd.toml
--root /mnt/microk8s_usb/var/lib/containerd
--state /mnt/microk8s_usb/run/containerd
--address ${SNAP_COMMON}/run/containerd.sock
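
containerd only picks up changes to this args file on a restart; a minimal way to apply the edit, assuming the standard snap-provided commands, is:

# Restart so containerd re-reads its args (this restarts all MicroK8s services)
sudo microk8s stop
sudo microk8s start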

After restarting MicroK8s, the pods are always stuck in ContainerCreating, although the node is in the Ready state. When I describe the pods, I see the following message:

Warning  FailedCreatePodSandBox  2m54s (x17 over 6m41s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/mnt/microk8s_usb/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/152/work upperdir=/mnt/microk8s_usb/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/152/fs lowerdir=/mnt/microk8s_usb/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown

Did I do something wrong? Thank you. inspection-report-20210131_070151.tar.gz
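
One possible cause of the overlayfs "invalid argument" failure above is a backing filesystem that overlayfs cannot sit on top of (FAT/exFAT, which USB drives often ship with, is a common culprit). A quick way to check the mount's filesystem type, assuming standard Linux tooling:

# Show the filesystem type backing the USB mount; overlayfs generally
# needs something like ext4 or xfs underneath.
df -T /mnt/microk8s_usb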

nlnjnj commented 3 years ago

After researching the source code, I found that currently, if you want to change the default path, you can only change the SNAP_COMMON environment variable, but that also moves all the directories under SNAP_COMMON.

If you only want to change the default storage location, you can redeploy the hostpath-provisioner.

hostpath-provisioner.yaml

Replace `/data/default-storage` with your local path and change the image `cdkbot/hostpath-provisioner-amd64:1.0.0` to match your architecture.
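
If you are unsure which image tag matches your node, a quick check of the machine architecture (assuming a standard Linux userland):

# Prints the machine architecture, e.g. x86_64 (amd64) or aarch64 (arm64)
uname -m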

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-provisioner
  labels:
    k8s-app: hostpath-provisioner
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      k8s-app: hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: hostpath-provisioner
    spec:
      serviceAccountName: microk8s-hostpath
      containers:
        - name: hostpath-provisioner
          image: cdkbot/hostpath-provisioner-amd64:1.0.0
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /data/default-storage
          #            - name: PV_RECLAIM_POLICY
          #              value: Retain
          volumeMounts:
            - name: pv-volume
              mountPath: /data/default-storage
      volumes:
        - name: pv-volume
          hostPath:
            path: /data/default-storage

Then

sudo microk8s kubectl delete -f hostpath-provisioner.yaml
sudo microk8s kubectl create -f hostpath-provisioner.yaml
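
To sanity-check the redeployment, one rough approach is to confirm the provisioner pod is running and then create a throwaway PVC; the test-claim name below is just a placeholder, and microk8s-hostpath is assumed to be the storage class your cluster uses:

sudo microk8s kubectl -n kube-system get pods -l k8s-app=hostpath-provisioner
sudo microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: microk8s-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Mi
EOF

If provisioning works, a directory for the claim should appear under the host path you configured in the deployment.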

nlnjnj commented 3 years ago

This hack seems to have some issues; maybe I missed something.

balchua commented 3 years ago

What I usually do is clone/fork the project and build MicroK8s myself.
That way I can change the configuration to my liking. The instructions for building the MicroK8s snap are here: https://github.com/ubuntu/microk8s/blob/master/docs/build.md
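
A rough outline of that workflow (the linked build.md is authoritative; snapcraft with an LXD backend is assumed here):

git clone https://github.com/ubuntu/microk8s
cd microk8s
# adjust the default args in the source tree to your liking before building
snapcraft --use-lxd
sudo snap install microk8s_*.snap --classic --dangerous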

Thram commented 2 years ago

As a quick fix, you can edit the hostpath-provisioner deployment and update only the volume's hostPath:

> kubectl -n kube-system edit deploy hostpath-provisioner

###
# ...
###
      volumes:
        - name: pv-volume
          hostPath:
            path: /data/default-storage
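
The same change can be made non-interactively with a JSON patch, which is handier for scripting; the volume is assumed to sit at index 0, and /mnt/new-storage below is only a placeholder for your actual path:

# Patch only the hostPath of the pv-volume volume
microk8s kubectl -n kube-system patch deploy hostpath-provisioner --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/volumes/0/hostPath/path","value":"/mnt/new-storage"}]'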

skol101 commented 2 years ago

Is there a way to change not only the path but also the type of storage? hostPath volumes aren't supported by Velero backups.

jinjin123 commented 2 years ago

Version: Ubuntu 22.04. Changing the host path produces an error; the new path is on the same kind of partition as the old disk.

Jul 6 18:46:42 jin microk8s.daemon-containerd[19905]: + exec /snap/microk8s/3272/bin/containerd --config /var/snap/microk8s/3272/args/containerd.toml --address /var/snap/microk8s/common/run/containerd.sock --root /mnt/raid1/microk8s/var/lib/containerd --state /mnt/raid1/microk8s/run/containerd

Jul 6 18:46:42 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:42.996927642+08:00" level=info msg="starting containerd" revision=3df54a852345ae127d1fa3092b95168e4a88e2f8 version=v1.5.11
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.052813404+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.052931882+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.055696445+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.0-39-generic\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.055774026+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.056245366+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/raid1/microk8s/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.056294506+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.056349049+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.056376198+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.056429905+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.056671386+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.057108948+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/raid1/microk8s/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.057158042+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.057196126+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.057221884+08:00" level=info msg="metadata content store policy set" policy=shared
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.057455420+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 18:46:43 jin microk8s.daemon-containerd[19905]: time="2022-07-06T18:46:43.057502121+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.