Environmental Info:
K3s Version:
k3s: v1.27.11-k3s1 and v1.28.5-k3s1 (Tried on both)
Node(s) CPU architecture, OS, and Version:
Linux
Cluster Configuration:
I am running k3s servers on Kubernetes host clusters in a multi-tenant setup.
Describe the bug:
We were running v1.26.12-k3s1 for quite some time, and recently we upgraded our host clusters and k3s to v1.27.9 and v1.27.11-k3s1 respectively. After that, I started seeing pods stuck in Pending state. I checked the k3s logs; the controller creates the pods successfully, but they then sit in Pending.
Controller nginx-deployment-5bc8fcb6c7 created pod nginx-deployment-5bc8fcb6c7-tjxxr
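To see why the scheduler is leaving the pod in Pending, the usual first check is the pod's events (a minimal sketch; the pod name comes from the log line above, and the default namespace is an assumption):

# Scheduling events for the stuck pod (namespace assumed to be default)
kubectl describe pod nginx-deployment-5bc8fcb6c7-tjxxr -n default
# Recent events cluster-wide, in case the pod shows no events at all
kubectl get events -n default --sort-by=.metadata.creationTimestamp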
There is one scenario that is quite weird. Say I am running 3 replicas of nginx-deployment; when the issue occurs and I scale the replicas to 5, the 2 new pods go into Pending state. At the same time, I deleted one of the older nginx pods and described the svc, and the svc endpoints still showed the older 3 pod IPs.
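Roughly, the sequence when this happened was as follows (a sketch; the service name nginx-svc is illustrative, not the actual name):

# Scale up; the 2 new pods stayed Pending
kubectl scale deployment nginx-deployment --replicas=5
# Delete one of the original pods
kubectl delete pod <older-nginx-pod>
# The Endpoints section still listed the 3 old pod IPs
kubectl describe svc nginx-svc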
Node Conditions:
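Node conditions were checked with commands along these lines (a sketch; the node name is a placeholder):

kubectl get nodes
kubectl describe node <node-name> | grep -A8 Conditions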
Steps To Reproduce: It's happening randomly. Not sure if it's reproducible.
Expected behavior: Pod scheduling should happen without any issues, i.e. pods should not get stuck in Pending state.
Actual behavior: Pods get stuck in Pending state, and it happens randomly.