
leader-elected etcd controllers not consistently functional when leader election/lease mismatches occur #10046

Open Oats87 opened 2 weeks ago

Oats87 commented 2 weeks ago

Environmental Info: K3s Version:

k3s version v1.29.4+k3s1 (94e29e2e)
go version go1.21.9

Node(s) CPU architecture, OS, and Version: Linux <snipp> 6.2.0-39-generic #40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration: 3 servers, all run with --cluster-init=true

Describe the bug: When K3s is run with embedded etcd and leader election, certain etcd-related controllers can stop operating as expected if the k3s and k3s-etcd leader election leases end up held by different nodes.

Steps To Reproduce: On all 3 server nodes:

curl https://get.k3s.io | INSTALL_K3S_SKIP_START=true INSTALL_K3S_SKIP_ENABLE=true sh -
mkdir -p /etc/rancher/k3s
cat << EOF > /etc/rancher/k3s/config.yaml
cluster-init: true
token: <some-token>
EOF

On nodes 2 and 3, you also do: echo "server: https://<node1-address>:6443" >> /etc/rancher/k3s/config.yaml

Then, start k3s i.e. systemctl start k3s
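
Before moving on, it can help to confirm that all three servers have joined the cluster (run on any server node, assuming the default kubeconfig):

kubectl get nodes
# all three servers should eventually report Ready with the control-plane,etcd,master roles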

Once K3s is running, create a k3s-etcd-snapshot-extra-metadata configmap in order to specifically see the problem, i.e.

kubectl create configmap -n kube-system k3s-etcd-snapshot-extra-metadata --from-literal=test=ing

Then, create snapshots on various nodes i.e. k3s etcd-snapshot save

Observe that the k3s-etcd-snapshots configmap has a corresponding number of snapshots, i.e. kubectl get configmap -n kube-system shows a DATA figure matching the number of expected snapshots.
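
For illustration only (counts and ages below are placeholders), after taking e.g. three snapshots the check looks roughly like:

kubectl get configmap -n kube-system k3s-etcd-snapshots
NAME                 DATA   AGE
k3s-etcd-snapshots   3      5m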

Now, force a lease handover with kubectl edit lease k3s-etcd -n kube-system, changing the holder to a node that is NOT the current holder of the k3s lease (kubectl get lease k3s -n kube-system). Once this is done, log into the node you moved the k3s-etcd lease to and systemctl restart k3s. After this, kubectl get leases -n kube-system should show mismatched lease holders for k3s and k3s-etcd.
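
A quick way to confirm the mismatch (node names below are placeholders):

kubectl get lease k3s k3s-etcd -n kube-system
NAME       HOLDER     AGE
k3s        server-1   30m
k3s-etcd   server-2   30m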

Try to take another snapshot (k3s etcd-snapshot save) and observe that K3s never adds this snapshot to the k3s-etcd-snapshots configmap.
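
One way to see the discrepancy, assuming the default on-disk snapshot location, is to compare the snapshot files on the node that took the snapshot with the configmap contents:

# snapshot files written by this node (default location)
ls /var/lib/rancher/k3s/server/db/snapshots/
# the configmap's DATA count no longer grows to match
kubectl get configmap -n kube-system k3s-etcd-snapshots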

Expected behavior: The etcd controllers operate correctly on whichever node holds the lease.

Actual behavior: If the k3s-etcd lease is held by a different node than the k3s lease, the etcd controllers do not operate correctly.

Additional context / logs: On the new lease holder, the controller handlers can be seen being registered, but the controllers never react to anything on that node.

Apr 29 21:50:10 ck-ub2304-b-2 k3s[1448]: time="2024-04-29T21:50:10Z" level=info msg="Starting managed etcd node metadata controller"
Apr 29 21:50:10 ck-ub2304-b-2 k3s[1448]: time="2024-04-29T21:50:10Z" level=info msg="Starting managed etcd apiserver addresses controller"
Apr 29 21:50:10 ck-ub2304-b-2 k3s[1448]: time="2024-04-29T21:50:10Z" level=info msg="Starting managed etcd member removal controller"
Apr 29 21:50:10 ck-ub2304-b-2 k3s[1448]: time="2024-04-29T21:50:10Z" level=info msg="Starting managed etcd snapshot ConfigMap controller"

I have debugged this to what I believe is a missing call to sc.Start(ctx) in the -etcd leader election callback list. As per the comment at https://github.com/k3s-io/k3s/blob/0981f0069deaf2ba405cabbfea89d32d9d8e5364/pkg/server/server.go#L170-L174, sc.Start is called for apiserverControllers because additional informer caches must be started for newly registered handlers, but this only happens for the version.Program (i.e. k3s) LeaderElectedClusterControllerStarts here: https://github.com/k3s-io/k3s/blob/0981f0069deaf2ba405cabbfea89d32d9d8e5364/pkg/server/server.go#L136-L137

Within the corresponding version.Program+"-etcd" LeaderElectedClusterControllerStarts, there is no such sc.Start call in any of the callbacks defined.

A quick workaround is to "follow" the k3s lease holder to the holder of k3s-etcd, i.e. kubectl edit lease -n kube-system k3s and change the holder to the current k3s-etcd holder. On older versions of K3s where Lease objects are not used for leader election, the same concept can be applied to the corresponding annotation on the ConfigMap object in the kube-system namespace.
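
As a concrete sketch of that workaround on a Lease-based release (the jsonpath/patch form below is just one way to do it, not taken from the k3s docs):

# find the current k3s-etcd holder
HOLDER=$(kubectl get lease k3s-etcd -n kube-system -o jsonpath='{.spec.holderIdentity}')
# move the k3s lease to the same node
kubectl patch lease k3s -n kube-system --type merge -p "{\"spec\":{\"holderIdentity\":\"${HOLDER}\"}}"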

As the specific controller I am running into issues with operates on EtcdSnapshotFile objects and is only started when a k3s-etcd-snapshot-extra-metadata configmap exists in kube-system, it is not surprising that this specific case was missed, but I believe the missing call should be added to ensure compatibility with Rancher Provisioning.

It seems this issue was introduced along with the use of EtcdSnapshotFile CRs.

brandond commented 2 weeks ago

I think incorrect behavior in the k3s-etcd controllers probably could have happened prior to the ETCDSnapshotFile changes, although the symptoms would have been limited to just the etcd cluster membership management stuff (delete/pre-delete) not working right.

The original introduction of the missing informer start bug would have been in https://github.com/k3s-io/k3s/pull/6922 - not in https://github.com/k3s-io/k3s/pull/8064 - although that did make the snapshot configmap sync depend on the controllers being started, whereas previously it was just done ad-hoc as part of the snapshot save process.

brandond commented 2 weeks ago

Thanks for the report - the linked PR should fix this for the May releases.

orangedeng commented 1 week ago

Do we really need two leases, k3s and k3s-etcd? It seems both the k3s controllers and the k3s-etcd controllers will be registered on the node with !e.config.DisableAPIServer (APIServer enabled) when using the embedded DB.

Refer to: https://github.com/k3s-io/k3s/blob/14549535f13c63fc239ba055d36d590e68b01503/pkg/etcd/etcd.go#L617-L623 and https://github.com/k3s-io/k3s/blob/14549535f13c63fc239ba055d36d590e68b01503/pkg/server/server.go#L135-L140

brandond commented 1 week ago

Yes, ideally we would continue to allow these to be split up so that the work isn't always loaded onto a single server. I don't think we're interested in merging them back into a single controller at the moment.