k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

etcd controller shall not ignore ghost etcd member in health check #11231

Open mogliang opened 1 week ago

mogliang commented 1 week ago

Environmental Info: K3s Version:

Node(s) CPU architecture, OS, and Version:

Cluster Configuration: k3s cluster with embedded etcd

Describe the bug: Currently, when there is a ghost etcd member (a member that does not belong to any k3s node), the etcd controller chooses to ignore it. https://github.com/k3s-io/k3s/blob/4adcdf8db32a4be6569c65ede782526c102a0e52/pkg/etcd/etcd.go#L1115-L1117 This can be dangerous. Suppose there is a 3-control-plane-node k3s cluster with embedded etcd plus 1 ghost member that is already offline (possibly left behind by an unclean node removal): the cluster is then actually in an unhealthy state and cannot tolerate another control-plane node going offline, as the quorum sketch below illustrates.
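
For illustration only, a minimal Go sketch (not k3s code) of the quorum arithmetic behind that claim: etcd counts every registered member toward quorum, including an offline ghost, so 3 live nodes plus 1 ghost member leave no room for further failures.

```go
// Illustrative quorum arithmetic, not k3s code: quorum is computed over all
// registered etcd members, whether or not they are actually reachable.
package main

import "fmt"

func main() {
	liveMembers := 3  // control-plane nodes that are actually up
	ghostMembers := 1 // member left behind by an unclean node removal

	total := liveMembers + ghostMembers
	quorum := total/2 + 1             // 4/2 + 1 = 3
	tolerable := liveMembers - quorum // 3 - 3 = 0 further failures tolerated

	fmt.Printf("members=%d quorum=%d tolerable further failures=%d\n",
		total, quorum, tolerable)
}
```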

We are working on the k3s Cluster API provider. When we performed a rollout update on a k3s cluster, we hit an issue where a node was not removed cleanly and left an etcd member behind, which later caused quorum loss in the middle of the rollout.

Although this can be solved by leveraging the etcd.k3s.cattle.io/remove annotation, it would be better to have a second guard that ensures no ghost etcd member exists before doing any cluster management operation (the default CAPI control-plane implementation has this check). See the previous discussion: https://github.com/k3s-io/k3s/discussions/9841

Steps To Reproduce:

  1. Create a 3-control-plane-node k3s cluster.
  2. Shut down 1 control-plane machine and delete that node with kubectl delete node.

Expected behavior: the node's EtcdIsVoter condition should be false, with a message identifying the ghost member.

Actual behavior: the node's EtcdIsVoter condition shows as true.
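
For reference, a minimal client-go sketch (illustrative; it assumes the default k3s kubeconfig path) that reads the EtcdIsVoter condition mentioned above from each node:

```go
// Illustrative only: list nodes and print their EtcdIsVoter condition.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default k3s kubeconfig location on a server node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if string(c.Type) == "EtcdIsVoter" {
				fmt.Printf("%s EtcdIsVoter=%s reason=%s message=%q\n",
					n.Name, c.Status, c.Reason, c.Message)
			}
		}
	}
}
```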

Proposed fix: pull request https://github.com/k3s-io/k3s/pull/11232

Additional context / logs:

brandond commented 1 week ago

It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state? This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd.

Can you provide steps to reproduce this?

mogliang commented 1 week ago

It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state?

Good point. CAPI may consider this an abnormal state and hold any management operation until the cluster is stable (every etcd member mapped to a node), roughly as in the sketch below.
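
A hypothetical sketch of that stability check (illustrative only, not CAPI or k3s code; real k3s etcd member names carry a suffix after the node name, so exact-name matching is a simplification):

```go
// Illustrative stability check: every etcd member must map to a node, and
// every node must already be an etcd member.
package main

import "fmt"

func etcdMembersMappedToNodes(memberNames, nodeNames []string) bool {
	nodes := make(map[string]bool, len(nodeNames))
	for _, n := range nodeNames {
		nodes[n] = true
	}
	members := make(map[string]bool, len(memberNames))
	for _, m := range memberNames {
		members[m] = true
		if !nodes[m] {
			return false // ghost member: no node backs this etcd member
		}
	}
	for _, n := range nodeNames {
		if !members[n] {
			return false // node not yet joined/promoted as an etcd member
		}
	}
	return true
}

func main() {
	fmt.Println(etcdMembersMappedToNodes(
		[]string{"cp-1", "cp-2", "cp-3", "cp-ghost"},
		[]string{"cp-1", "cp-2", "cp-3"},
	)) // false: cp-ghost has no matching node
}
```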

This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd. Can you provide steps to reproduce this?

It seems that the issue we were facing was caused by stopping the VM before removing the node, not by an etcd member leak. When we remove the etcd member first and then stop the VM and delete the node, the rollout seems to be OK.

Still, we do hope there is a way for CAPI to learn the etcd member list as an additional safety check, but currently the etcd controller doesn't surface that information.

Another idea is to expose the etcd member list as a node annotation. What do you think, @brandond?

brandond commented 1 week ago

It seems that the issue we were facing is due to stop VM before removing node, not etcd member leak. When we remove etcd first, and then stop VM and delete node, rollout seems to be ok.

You should be able to delete the node from the cluster (both the kubernetes node, and the etcd member) even when the node is offline. It may take a bit longer if you're waiting for the etcd controller lease to transition to another node to trigger the cluster member remove and cleanup of the finalizer on the node object... but it should succeed. Can you provide steps to reproduce whatever problem you're running into here?

Another idea is to expose the etcd member list as a node annotation

I'm not sure this makes sense to me. You'd have global etcd cluster state, stored as an annotation on individual node objects? If for some reason you need to check etcd cluster membership, I'd probably recommend just querying that directly from etcd (if you must), or via the supervisor API - which is available anonymously if accessed via localhost: https://github.com/k3s-io/k3s/blob/917761ce54524b2156fd31ae33350ff19080b981/pkg/etcd/etcd.go#L701-L703

```
root@systemd-node-1:/# curl -kS https://127.0.0.1:6443/db/info
{"members":[{"ID":4682922252190157995,"name":"systemd-node-1-103abf0d","peerURLs":["https://172.17.0.4:2380"],"clientURLs":["https://172.17.0.4:2379"]}]}
```
mogliang commented 1 week ago

You should be able to delete the node from the cluster (both the kubernetes node, and the etcd member) even when the node is offline. It may take a bit longer if you're waiting for the etcd controller lease to transition to another node to trigger the cluster member remove and cleanup of the finalizer on the node object... but it should succeed. Can you provide steps to reproduce whatever problem you're running into here?

steps:

  1. create a 2-control-plane-node cluster
  2. stop 1 VM
  3. delete the node

The problem is: if we stop the VM first, the cluster loses quorum immediately and there is no longer any chance to delete the node. That's why we need to delete the etcd member before stopping the VM. But if a ghost member exists for some reason, deleting the etcd member first may not be safe either. However, I'm not able to reproduce the condition that leaves a ghost member right now.

I'm not sure this makes sense to me. You'd have global etcd cluster state, stored as an annotation on individual node objects? If for some reason you need to check etcd cluster membership, I'd probably recommend just querying that directly from etcd (if you must), or via the supervisor API - which is available anonymously if accessed via localhost:

Indeed, it doesn't seem quite right to surface etcd-cluster-level info on a node. Accessing etcd directly is the solution we take right now in the CAPI k3s provider (see https://github.com/k3s-io/cluster-api-k3s/issues/75), but it is not ideal either: it requires managing the etcd CA cert on the management cluster and creating an etcd proxy pod on the target cluster, which is quite heavy, and it bypasses the k3s etcd controller for managing the etcd cluster, which seems wrong from a design perspective. If the etcd controller could expose etcd membership info, we would be able to avoid direct etcd access from the CAPI management cluster.
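
For context, a sketch of that direct-access approach (illustrative only; the certificate file names are placeholders for the etcd client cert, key, and CA that must be made available to the caller), listing members with the etcd v3 client:

```go
// Illustrative only: list etcd members directly with the etcd v3 client,
// which is what makes the CA/cert management and proxying necessary.
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder paths: a client cert/key signed by the etcd CA, plus the CA itself.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("server-ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	resp, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}
	for _, m := range resp.Members {
		fmt.Printf("etcd member %x name=%s peers=%v\n", m.ID, m.Name, m.PeerURLs)
	}
}
```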

brandond commented 1 week ago

create a 2 cp nodes cluster

You should never have a cluster with 2 etcd nodes. As you noted, loss of either node will cause a total cluster outage, and you'll need to reset the etcd cluster to a single node to recover. This is what the --cluster-reset flag is for.

that's the reason why we need delete etcd member first before stop VM.

You can either delete the node object, or add the pre-delete annotation to the node to remove it from the etcd cluster without deleting the node resource itself. This annotation is how Rancher prepares nodes for deletion.
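
For reference, a minimal client-go sketch of applying that pre-delete annotation (illustrative; the kubeconfig path and the "true" value are assumptions here, while the annotation key is the etcd.k3s.cattle.io/remove mentioned earlier in this thread):

```go
// Illustrative only: patch the pre-delete annotation onto a node so the etcd
// controller removes its member without deleting the node object.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default k3s kubeconfig location on a server node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "cp-2" // hypothetical node being prepared for removal
	patch := []byte(`{"metadata":{"annotations":{"etcd.k3s.cattle.io/remove":"true"}}}`)

	node, err := cs.CoreV1().Nodes().Patch(context.Background(), nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotated", node.Name)
}
```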

mogliang commented 6 days ago

You should never have a cluster with 2 etcd nodes. As you noted, loss of either node will cause a total cluster outage, and you'll need to reset the etcd cluster to a single node to recover. This is what the --cluster-reset flag is for.

A 2-control-plane-node cluster is the intermediate state when we test scaling down from N control-plane nodes to 1 via CAPI.

You can either delete the node object, or add the pre-delete annotation to the node to remove it from the etcd cluster without deleting the node resource itself. This annotation is how Rancher prepares nodes for deletion.

Thanks, we are using the etcd.k3s.cattle.io/remove annotation. What else does the pre-delete annotation do besides removing the etcd member?

brandond commented 6 days ago

That's all it does.

If you're scaling down to 1, you might consider just running a cluster-reset instead of stepping through all the intermediate node counts one by one.