rancher / local-path-provisioner

Dynamically provisioning persistent local storage with Kubernetes
Apache License 2.0

Provisioner on k3s switches namespace, config after reboot #232

Closed samstride closed 1 week ago

samstride commented 2 years ago

Hi,

Thanks for maintaining this repo.

I have installed the provisioner on a single node k3s cluster (home lab) using:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

The provisioner originally installs itself in namespace local-path-storage.

However, after a node restart (power outage), the provisioner has deployments and configs set up in kube-system, and the original deployment fails with an error about service account permissions. Sorry, I don't have the exact message handy.

Thanks.

derekbit commented 2 years ago

@samstride Can you show the pods in your system by k get pods -A | grep local-path-provisioner? BTW, k3s has already embedded the local-path-provisioner, so you can use it directly.

samstride commented 2 years ago

@derekbit , thanks for responding.

When I installed k3s originally, I don't think the local-path provisioner got installed.

I reinstalled the provisioner in kube-system now.

Here is the output you requested:

kubectl get pods -A | grep local-path-provisioner

kube-system            local-path-provisioner-84bb864455-47fv4        1/1     Running     2 (2d16h ago)   13d

Another thing that happens after a restart is that the ConfigMap seems to change.

I originally had something like this:

  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/opt/local-path-provisioner"]
            }
            ]
    }

Then after a node restart, it changed to this:

  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/var/lib/rancher/k3s/storage"]
            }
            ]
    }

There is a small possibility that the k3s version might have upgraded from v1.22.6+k3s1 to v1.22.7+k3s1 before the reboot. Not sure if that might have caused this issue around configmaps.
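For reference, restoring the original behavior should just mean re-applying the ConfigMap from the upstream deploy manifest. A minimal sketch (the ConfigMap name and namespace follow the upstream local-path-storage.yaml, and the path shown is the upstream default, not the k3s one):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/opt/local-path-provisioner"]
        }
      ]
    }
```

As the comments below describe, though, a change like this does not survive a k3s restart when the k3s-managed copy of the provisioner is in play, because k3s re-applies its packaged manifest.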

chenyg0911 commented 1 year ago

Same problem with "v1.22.5+k3s1". I updated the local-path-config ConfigMap to use a different disk volume instead of the default path. When I restart the "local-path-provisioner-xxxx" pods, it is fine. But when I restart k3s, the config is restored to the default path "/var/lib/rancher/k3s/storage". I also tried updating the config under /var/lib/rancher/k3s/server/manifests/local-storage.yaml; the effect is the same: when k3s restarts, it is also restored to the default path.

harryzcy commented 1 year ago

@chenyg0911 The default path in k3s is set by the --default-local-storage-path CLI argument when starting k3s. If the argument is omitted, it defaults to "/var/lib/rancher/k3s/storage".
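As a sketch of that, assuming you manage k3s through its config file (k3s maps CLI flags such as --default-local-storage-path to keys in /etc/rancher/k3s/config.yaml; the path below is a made-up example):

```yaml
# /etc/rancher/k3s/config.yaml
# Equivalent to starting the server with:
#   k3s server --default-local-storage-path /mnt/data/local-path
default-local-storage-path: /mnt/data/local-path
```

With this in place, the packaged local-storage manifest that k3s re-applies on restart is rendered with the custom path, so the ConfigMap no longer reverts.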

gb-123-git commented 3 months ago

Is there any way to make the configuration stick other than restarting k3s with --default-local-storage-path? Is there a possibility of creating something like a local-storage-custom.yaml that overrides the default file for the path?
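One possible workaround, based on k3s's documented component flags rather than anything confirmed in this thread: disable the packaged local-storage add-on so k3s stops restoring its manifest on restart, then deploy the upstream provisioner from this repo with your own ConfigMap. A sketch:

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
# Equivalent to: k3s server --disable local-storage
disable:
  - local-storage
```

After that, the upstream deploy manifest can be applied and its local-path-config ConfigMap edited freely, since k3s no longer manages the add-on. (k3s also documents a `.skip` marker file alongside a packaged manifest in /var/lib/rancher/k3s/server/manifests as a way to prevent it from being deployed.)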

github-actions[bot] commented 1 week ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 1 week ago

This issue was closed because it has been stalled for 5 days with no activity.