Expected Behavior
After modifying the 'local-path' storage class to default: false, it should stay that way after a k3s restart.
Current Behavior
If I modify the 'local-path' storage class, the change is overwritten after a k3s restart.
Steps to Reproduce
1. Patch the 'local-path' storage class with the annotation storageclass.kubernetes.io/is-default-class: 'false'
2. Restart k3s on the master node
3. The manifests are recreated and the default class is set back to true
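The steps above can be reproduced with commands along these lines (a sketch assuming kubectl access to the cluster; the patch syntax is the standard way to change a StorageClass annotation):

```shell
# Step 1: mark the bundled 'local-path' StorageClass as non-default
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Step 2: restart k3s on the master node
sudo systemctl restart k3s

# Step 3: check the annotation - it has reverted to "true"
kubectl get storageclass local-path -o yaml | grep is-default-class
```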
Context (variables)
Running 3 Ubuntu VMs in Proxmox with 1 master and 2 workers
Hardware:
a single Intel i5-1235U NUC
Possible Solution
The issue seems to be caused by k3s recreating the manifest files that we delete in the Ansible playbook.
For the moment, after running Ansible the first time, I've set the --disable local-storage flag on the master node to prevent k3s from recreating the local-storage.yaml file.
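One way to persist that workaround (a sketch, not tested here; it assumes the k3s config-file mechanism, which accepts the same options as the CLI flags, with repeatable flags written as YAML lists, and the default config path):

```shell
# Add the disable option to the k3s server config file so it survives
# restarts and upgrades, instead of editing the systemd unit.
cat <<'EOF' | sudo tee -a /etc/rancher/k3s/config.yaml
disable:
  - local-storage
EOF
sudo systemctl restart k3s
```

Note this removes the local-path provisioner entirely, which is fine if another storage class (e.g. Longhorn) is meant to be the default anyway.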
Expected Behavior
After modifying the 'local-path' storage class to
default: false
, it should stay like that after k3s restart.Current Behavior
If I modify 'local-path' storage class, it's overwritten after k3s restart.
Steps to Reproduce
storageclass.kubernetes.io/is-default-class: 'false'
Context (variables)
Running 3 ubuntu VM in proxmox with 1 master and 2 workers
Hardware: a single 1235U nuc
Possible Solution
The issue seem to be caused by k3s recreating the manifests files that we delete in the ansible.
For the moment, after running ansible the first time, I've set
--disable local-storage
flag on the master node to prevent recreating local-storage.yaml file.Another solution seem to modify the manifest file instead of deleting it ( https://github.com/k3s-io/k3s/issues/3441 )
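A sketch of that alternative (untested; it assumes the default k3s server manifests path and that the bundled manifest sets the annotation to "true" in exactly this form):

```shell
# Flip the default-class annotation inside the bundled manifest rather than
# deleting the file, so k3s keeps (re)applying the modified version.
sudo sed -i \
  's|storageclass.kubernetes.io/is-default-class: "true"|storageclass.kubernetes.io/is-default-class: "false"|' \
  /var/lib/rancher/k3s/server/manifests/local-storage.yaml
```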