Closed: mbrandeburg closed this issue 9 months ago.
Are there any errors in the helper pod or k3s logs? `WaitForFirstConsumer` would indicate that it's waiting for the PVC to actually be used by a pod. Have you created a pod that mounts the PVC?
You can find this in the docs here: https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
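For reference, a minimal sketch of a pod that consumes the claim, which is what triggers binding under `WaitForFirstConsumer` (names follow the Rancher local-path-provisioner example; adjust `claimName` to match your PVC):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    volumeMounts:
    - name: volv
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      # Assumed PVC name; until a pod like this is scheduled,
      # the PVC stays Pending by design with WaitForFirstConsumer.
      claimName: local-path-pvc
```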
Yes, I'm realizing that. I just erased my previous message to re-run my Vagrant tests. The issue first appeared when running Terraform after a new update to Ubuntu; I was deploying pods alongside the PVC, which is what caused the helper pod to appear. Running these quick tests off the example YAML from Rancher on the Vagrant VM, I realized I did indeed need to deploy the pod to trigger creation of the PV backing the PVC. I'm running the tests again on 23.10 on my Vagrant VM to confirm that this behavior isn't just confined to the Pi.
Oh my, I'm realizing this may be the answer. My Terraform is riddled with `depends_on` calls, and I now suspect that deployments prior to the latest update were done from scratch without those calls. The fresh deployment done post-update today is likely the first time I added `depends_on` for PVCs, which caused this "circular" waiting game: the Deployment never got created because the PVCs weren't ready, but the PVCs wouldn't get their backing PVs without the Deployment existing first. I'm unsure why helper pods still spawned in that case, but I'm assuming it's a `depends_on` issue with my Terraform and not actually k3s. (Which would certainly make sense; I kept asking how I could be the first to encounter a bug with 23.10.) My apologies for wasting people's time if this proves to be the case.
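A hypothetical sketch of the problematic Terraform shape (resource names and specs are illustrative, not from my actual config; assumes the Kubernetes provider's `kubernetes_persistent_volume_claim`, whose `wait_until_bound` defaults to `true`):

```hcl
resource "kubernetes_persistent_volume_claim" "data" {
  metadata { name = "data" }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "local-path"
    resources {
      requests = { storage = "1Gi" }
    }
  }
  # wait_until_bound defaults to true, so `terraform apply` blocks here
  # until a PV binds to the claim...
}

resource "kubernetes_deployment" "app" {
  # ...but with WaitForFirstConsumer, no PV is provisioned until a pod
  # from this Deployment mounts the claim, and depends_on prevents the
  # Deployment from being created until the PVC resource finishes:
  # a circular wait.
  depends_on = [kubernetes_persistent_volume_claim.data]

  metadata { name = "app" }
  # (deployment spec that mounts the claim omitted for brevity)
}
```

Dropping the `depends_on` (or setting `wait_until_bound = false` on the claim) breaks the cycle, since the Deployment's pod is itself the consumer the binding is waiting for.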
Can confirm this works on an up-to-date Ubuntu 23.10 image via Vagrant. No idea what's up with the Raspberry Pi, but it's not the issue above. Closing for now; thanks all for the help!
Environmental Info:
K3s Version: v1.28.5+k3s1; v1.29.0+k3s1; v1.28.3+k3s2
Node(s) CPU architecture, OS, and Version:
Linux raspberrypi5 6.5.0-1008-raspi #11-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 22 19:08:26 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
Linux vagrant 6.5.0-14-generic #14-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:59:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
Describe the bug:
Running `kubectl apply` against the sample here, the PVC remains in `Pending`. I tried this as well using the Longhorn driver, assuming perhaps it's just a bug with local-path-provisioner, but alas, it too failed. I would prefer to just use local storage and not use Longhorn for storage drivers, but the point is that I tried both. I've tried 3 different versions of k3s to no avail: v1.28.5+k3s1, v1.29.0+k3s1, and v1.28.3+k3s2. If anyone can look into this, I'd greatly appreciate it, as persistent storage is important to my ability to deploy applications.

Steps To Reproduce:
```shell
sudo apt-get update && sudo apt-get upgrade -y
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.5+k3s1 sh
```

Expected behavior:
`local-path-provisioner` will create a helper pod.

Actual behavior:
Additional context / logs:
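For gathering the relevant context, commands along these lines can be used (a sketch; the PVC name and the provisioner's namespace/label are assumptions and may differ in your cluster):

```shell
# Show the claim's status and the events explaining why it is Pending
kubectl get pvc local-path-pvc
kubectl describe pvc local-path-pvc

# Logs from the local-path-provisioner controller (assumed to run in
# kube-system with this label, as in a default k3s install)
kubectl -n kube-system logs -l app=local-path-provisioner
```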