There is an open PR for the RKE2 provider to manage the ETCD membership, which is relevant to this: https://github.com/rancher-sandbox/cluster-api-provider-rke2/pull/265
Thanks Richard. Although RKE2 inherits from K3s, they host etcd in different ways. RKE2 is more like upstream Kubernetes, hosting etcd as a static pod, so the PR you mentioned manages etcd the same way the kubeadm control plane provider does.
K3s, on the other hand, embeds etcd inside the k3s host process, and k3s itself already has controllers to manage the embedded etcd. The k3s folks also suggest not operating on etcd directly. So I'm proposing we leverage the k3s etcd controller to manage etcd.
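For reference, a minimal sketch of what "leveraging the k3s etcd controller" could look like from the provider side: instead of dialing etcd directly, we annotate the Node and let the embedded controller perform the member removal. The annotation key `etcd.k3s.cattle.io/remove` is taken from the k3s discussions linked in this issue and should be treated as an assumption here, not a confirmed API.

```go
// Sketch: ask the k3s embedded etcd controller to remove a member by
// annotating the corresponding Node, instead of talking to etcd directly.
// The annotation key below is an assumption based on the linked k3s threads.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// etcdRemoveAnnotation marks a control-plane node whose embedded etcd member
// should be removed by the k3s etcd controller (assumed key, see above).
const etcdRemoveAnnotation = "etcd.k3s.cattle.io/remove"

func markNodeForEtcdRemoval(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	// Patch only the annotation so we do not clobber concurrent node updates.
	patch := []byte(fmt.Sprintf(`{"metadata":{"annotations":{"%s":"true"}}}`, etcdRemoveAnnotation))
	_, err := cs.CoreV1().Nodes().Patch(ctx, nodeName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := markNodeForEtcdRemoval(context.Background(), cs, "k3s-control-plane-1"); err != nil {
		panic(err)
	}
}
```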
I've created a branch locally to implement etcd management following this doc, and it's working fine. @richardcase should we combine the implementation code into this PR as well, or put it in a separate PR?
Thanks @mogliang. I'd keep this PR for the doc and have a separate PR for the implementation. I will make sure I review the proposal today.
And great that you have it working :tada:
I did some more investigation into whether the etcd proxy could be replaced by the k3s etcd controller.
Case 1: the kubeadm CAPI provider runs 2 health checks per etcd node:
- Check 1: whether the list of member IDs reported by this etcd member is the same as that reported by all other members.
- Check 2: whether the etcd member has any active alarm.
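As an illustration, here is a hedged sketch of these two checks done directly with the etcd v3 client, which is roughly what the kubeadm provider does through the proxy; endpoints, TLS configuration, and member-to-node mapping are simplified assumptions.

```go
// Sketch of the two per-member health checks in Case 1, done directly
// against etcd with the v3 client. TLS setup is omitted for brevity.
package etcdhealth

import (
	"context"
	"fmt"
	"sort"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// CheckMembers verifies Check 1 (every member reports the same member list)
// and Check 2 (no member has an active alarm) across the given endpoints,
// connecting to each member individually.
func CheckMembers(ctx context.Context, endpoints []string) error {
	var reference string
	for _, ep := range endpoints {
		cli, err := clientv3.New(clientv3.Config{Endpoints: []string{ep}, DialTimeout: 5 * time.Second})
		if err != nil {
			return err
		}

		// Check 1: every member must report the same (sorted) member ID list.
		resp, err := cli.MemberList(ctx)
		if err != nil {
			cli.Close()
			return fmt.Errorf("listing members via %s: %w", ep, err)
		}
		ids := make([]uint64, 0, len(resp.Members))
		for _, m := range resp.Members {
			ids = append(ids, m.ID)
		}
		sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
		list := fmt.Sprint(ids) // string form of the sorted IDs, for easy comparison
		if reference == "" {
			reference = list
		} else if list != reference {
			cli.Close()
			return fmt.Errorf("member %s reports a different member list", ep)
		}

		// Check 2: the member must not have any active alarm (NOSPACE, CORRUPT, ...).
		alarms, err := cli.AlarmList(ctx)
		cli.Close()
		if err != nil {
			return err
		}
		if len(alarms.Alarms) > 0 {
			return fmt.Errorf("member %s reports %d active alarm(s)", ep, len(alarms.Alarms))
		}
	}
	return nil
}
```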
Case 2: the EtcdIsVoter annotation (code for reporting it). We need this annotation, otherwise scaling down from 2 nodes to 1 node fails (#96).
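A minimal sketch of how the voter status behind Case 2 could be read from etcd, assuming the member can be matched to the node by name; k3s member names may carry a suffix, so the matching here is only illustrative.

```go
// Sketch: a member is a voter when it is not a learner. Matching the etcd
// member to the node by exact name is an assumption for illustration.
package etcdhealth

import (
	"context"
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// IsVoter reports whether the etcd member named like the node is a voting
// (non-learner) member of the cluster.
func IsVoter(ctx context.Context, cli *clientv3.Client, nodeName string) (bool, error) {
	resp, err := cli.MemberList(ctx)
	if err != nil {
		return false, err
	}
	for _, m := range resp.Members {
		if m.Name == nodeName {
			return !m.IsLearner, nil
		}
	}
	return false, fmt.Errorf("no etcd member found for node %s", nodeName)
}
```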
Case 3: the kubeadm CAPI provider iterates over all etcd members and finds members that do not have a corresponding node; any such member is removed from the etcd member list. We also need to reconcile etcd members this way to avoid losing quorum when deleting a node, but this is not supported by the k3s etcd controller, so it would require modifying k3s code.
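For completeness, a sketch of Case 3 done against etcd directly, i.e. the proxy-based approach; the name-based matching of etcd members to Nodes is a simplifying assumption.

```go
// Sketch of Case 3: remove any etcd member that has no matching Node,
// so stale members do not eat into quorum.
package etcdhealth

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// RemoveOrphanedMembers removes etcd members whose name does not correspond
// to any existing Node.
func RemoveOrphanedMembers(ctx context.Context, cli *clientv3.Client, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	nodeNames := map[string]bool{}
	for _, n := range nodes.Items {
		nodeNames[n.Name] = true
	}

	members, err := cli.MemberList(ctx)
	if err != nil {
		return err
	}
	for _, m := range members.Members {
		if !nodeNames[m.Name] {
			if _, err := cli.MemberRemove(ctx, m.ID); err != nil {
				return err
			}
		}
	}
	return nil
}
```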
If we want to remove the etcd proxy and rely on the k3s etcd controller, we need to implement Case 1 Check 1 and Case 3 in k3s code. Whether that change is worth making needs more discussion. For now, we could simply implement Case 2 to fix #96.
Thanks @nasusoba for the detailed investigation. So, let's keep the etcd proxy approach to implement the etcd feature.
We may also need to work with k3s to close the gap; leveraging the k3s etcd controller is the better option for the long run.
Initially, we discussed possible solutions for managing etcd, and the conclusion was to create an etcd proxy pod and then reuse the kubeadm code to manage etcd: https://github.com/k3s-io/cluster-api-k3s/issues/75
Recently, we discussed this with the k3s folks (https://github.com/k3s-io/k3s/discussions/9818, https://github.com/k3s-io/k3s/discussions/9841). They mentioned there is an embedded controller living inside the k3s process that manages the etcd lifecycle, and it also exposes interfaces that allow us to interact with it. I think this may be a better direction to take. I have drafted the design doc here.
Please help review and comment~