liyimeng closed this 2 weeks ago
I see we already allow this for the etcd agent node; can it be extended to normal agent nodes? Based on resources like this, I would consider this a case where k3s breaks k8s conformance.
This is a duplicate of https://github.com/k3s-io/k3s/issues/1686
K3s agents do not start the container runtime or kubelet until a server is reachable to provide up-to-date certificates and configuration. Because the kubelet isn't started yet, there is no issue with Kubernetes conformance - conformance places no requirements on how a distribution operates prior to startup of Kubernetes itself, or what startup dependencies a distribution enforces.
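The ordering described above can be sketched as a small shell script. All the function names below are illustrative stand-ins, not real k3s components; the actual agent performs these steps in-process in Go.

```shell
# Hedged sketch of the agent startup ordering described above.
# Every function here is a stub standing in for real agent logic.
server_reachable() { return 0; }  # stub: pretend the server answers
fetch_config() { echo "fetched certificates and kubelet config"; }
start_runtime() { echo "started containerd"; }
start_kubelet() { echo "started kubelet (static pods start here)"; }

if server_reachable; then
  fetch_config
  start_runtime
  start_kubelet
else
  # This branch is the reporter's situation: nothing below the
  # server check runs, so no runtime, no kubelet, no static pods.
  echo "server unreachable: runtime and kubelet are NOT started"
fi
```

The point of the sketch is that the static pod sits at the bottom of the dependency chain: it cannot start before the kubelet, and the kubelet is not started before the server responds.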
Although there have been discussions on this topic in the past, we are not planning to relax this requirement at this time.
Environmental Info:
K3s Version: 1.28.6

Node(s) CPU architecture, OS, and Version:
4 core arm64

Cluster Configuration:
1 master + 1 worker

Describe the bug:
I have a static pod running on each node. Both nodes went down after a power loss. For some reason the master node was not able to recover, but the worker node booted back up as normal. However, the k3s service got stuck at
hence my static pod was not able to come back.
I am wondering if this is expected behaviour. Would it not be nicer to have containerd and the kubelet up and running so the static pod starts, and let the kubelet keep trying to reconnect once the master becomes available later on?
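The retry behavior being asked for can be sketched as a bounded probe loop in shell. The function name `wait_for_server`, its arguments, and the stand-in probe commands are all hypothetical; k3s implements its server-wait logic internally and does not expose anything like this.

```shell
# Illustrative sketch of "keep trying until the server is back": run a
# probe command with bounded retries. The probe stands in for a real
# reachability check (e.g. curl against the server's supervisor port).
wait_for_server() {
  probe="$1"; max_tries="$2"; delay="$3"
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    if $probe >/dev/null 2>&1; then
      echo "server reachable"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "server still unreachable after $max_tries attempts"
  return 1
}

# Demo with stand-in probes: 'false' simulates an unreachable server,
# 'true' simulates a reachable one.
wait_for_server false 3 0   # prints: server still unreachable after 3 attempts
wait_for_server true 3 0    # prints: server reachable
```

The difference being debated is where such a loop sits: the reporter wants it inside a running kubelet (so static pods come up first), while k3s runs it before the kubelet is started at all.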
Steps To Reproduce:

Expected behavior:
The k3s service on the worker node starts containerd, the kubelet, and the static pod, and keeps trying to reach the master until the master node is online.
Actual behavior:
The k3s service on the worker node failed to start.
Additional context / logs: