aenix-io / etcd-operator

New generation community-driven etcd-operator!
https://etcd.aenix.io
Apache License 2.0

Design and Implement a Cluster Scale Up/Down Mechanism #58

Open kvaps opened 7 months ago

kvaps commented 7 months ago

We need to design a mechanism for scaling a cluster up and down.

When a user modifies spec.replicas, the cluster should scale to the required number of replicas accordingly. Currently, we are utilizing a StatefulSet, but we understand that we might have to move away from it in favor of a custom pod controller.

Scaling up should work out of the box, but scaling down might be more complex due to several considerations.

We're open to suggestions on how to address these challenges and implement an efficient and reliable scaling mechanism.

kvaps commented 7 months ago

Another case to consider is a user manually recreating a replica (by deleting the pod and the PVC). In such cases we need to verify within the cluster that the old replica is no longer a member.
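
A minimal sketch of such a check, assuming the operator talks to etcd with go.etcd.io/etcd/client/v3; the endpoints and the set of existing pod names are assumed to be collected elsewhere, and all names here are illustrative:

```go
// Sketch: detect etcd members that no longer have a backing pod,
// e.g. after a user manually deleted a pod and its PVC.
package main

import (
	"context"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// staleMembers returns the names of members known to etcd but not backed
// by any existing pod. podNames is assumed to be built from a pod list.
func staleMembers(ctx context.Context, endpoints []string, podNames map[string]bool) ([]string, error) {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		return nil, err
	}
	defer cli.Close()

	resp, err := cli.MemberList(ctx)
	if err != nil {
		return nil, err
	}

	var stale []string
	for _, m := range resp.Members {
		// A member whose name does not match any existing pod is stale and
		// should be removed before the replacement pod rejoins the cluster.
		if !podNames[m.Name] {
			stale = append(stale, m.Name)
		}
	}
	return stale, nil
}
```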

sircthulhu commented 7 months ago

Cluster rescaling proposal

The etcd operator should be able to scale the cluster up and down and react to pod or PVC deletion.

Scaling procedure

There should be status.replicas and status.instanceNames fields so the operator can understand which instances are already members, which should become members, and which should be removed.

We should introduce a new status condition Rescaling that is False when everything is fine and True while the cluster is rescaling or being repaired, for example when a pod (in case of emptyDir) or a PVC is deleted.
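
For illustration, a rough sketch of how these fields could look in the CRD types; everything beyond status.replicas, status.instanceNames and the Rescaling condition is an assumption:

```go
// Sketch of the proposed status fields; exact names and types in the operator may differ.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type EtcdClusterStatus struct {
	// Replicas is the number of instances that are confirmed cluster members.
	Replicas int32 `json:"replicas,omitempty"`
	// InstanceNames lists the pods that are currently healthy cluster members.
	InstanceNames []string `json:"instanceNames,omitempty"`
	// Conditions holds, among others, the Rescaling condition:
	// False when everything is fine, True while the cluster is rescaling or being repaired.
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
```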

The cluster state ConfigMap should build ETCD_INITIAL_CLUSTER only from the list in status.instanceNames, as those are the healthy cluster members.
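
As an illustration, ETCD_INITIAL_CLUSTER could be rendered from status.instanceNames roughly like this; the headless-service peer URL scheme and port are assumptions:

```go
// Sketch: build the ETCD_INITIAL_CLUSTER value from the instances recorded in
// status.instanceNames. The "<pod>.<headless-svc>.<ns>.svc:2380" peer URL
// scheme below is an assumption for illustration.
package main

import (
	"fmt"
	"strings"
)

func initialCluster(instanceNames []string, headlessSvc, namespace string) string {
	parts := make([]string, 0, len(instanceNames))
	for _, name := range instanceNames {
		parts = append(parts, fmt.Sprintf("%s=https://%s.%s.%s.svc:2380", name, name, headlessSvc, namespace))
	}
	return strings.Join(parts, ",")
}
```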

Status reconciliation

The status.replicas field should be filled on reconciliation based on the current number of ready replicas if the cluster is not in the rescaling state. It is first filled when the cluster is bootstrapped.

The status.instanceNames field should be filled on reconciliation based on the current ready replicas if the cluster is not in the rescaling state.
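
A small sketch of that rule, assuming the EtcdCluster type from the status sketch above and that the list of ready pod names is computed elsewhere:

```go
// Sketch: fill status.replicas and status.instanceNames from the currently
// ready pods, but only when the cluster is not in the Rescaling state.
package controller

import (
	"k8s.io/apimachinery/pkg/api/meta"
)

func reconcileStatus(cluster *EtcdCluster, readyPods []string) {
	// Skip while a scale operation is in progress so that in-flight
	// membership changes are not overwritten.
	if meta.IsStatusConditionTrue(cluster.Status.Conditions, "Rescaling") {
		return
	}
	cluster.Status.Replicas = int32(len(readyPods))
	cluster.Status.InstanceNames = readyPods
}
```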

Scaling up

When spec.replicas > status.replicas, the operator should scale the cluster up.

The process is the following (a rough Go sketch follows the list):

  1. Check that the cluster currently has quorum. If not, exit the reconciliation loop and wait until it becomes healthy.
  2. Provided that the cluster has quorum, it is safe to perform scaling up.
  3. Update the StatefulSet in accordance with spec.replicas.
  4. Update the EtcdCluster status condition, setting Rescaling to True with Reason: ScalingClusterUp.
  5. Execute etcdctl member add for each new member.
  6. Wait until the StatefulSet becomes Ready.
  7. Update status.replicas and status.instanceNames in accordance with spec.replicas and the current pod names.
  8. Update the EtcdCluster status condition, setting Rescaling to False with Reason: ReplicasMatchSpec.
  9. Update ETCD_INITIAL_CLUSTER in the cluster state ConfigMap according to status.instanceNames.
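
A condensed sketch of this flow under stated assumptions: EtcdCluster comes from the status sketch above; hasQuorum, peerURL, podNames, waitForStatefulSetReady and updateStateConfigMap are hypothetical helpers; the reconciler is assumed to embed a controller-runtime client. It shows the order of operations, not the final implementation:

```go
// Sketch: scale-up reconciliation in the order described above.
package controller

import (
	"context"
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func (r *EtcdClusterReconciler) scaleUp(ctx context.Context, cluster *EtcdCluster, sts *appsv1.StatefulSet, cli *clientv3.Client) error {
	// 1-2. Membership changes require quorum; back off until the cluster is healthy.
	if !hasQuorum(ctx, cli) {
		return fmt.Errorf("cluster has no quorum, requeueing")
	}

	// 3. Grow the StatefulSet to the desired size.
	sts.Spec.Replicas = &cluster.Spec.Replicas
	if err := r.Update(ctx, sts); err != nil {
		return err
	}

	// 4. Mark the cluster as rescaling.
	meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{
		Type: "Rescaling", Status: metav1.ConditionTrue, Reason: "ScalingClusterUp",
	})

	// 5. Register each new member (the equivalent of `etcdctl member add`).
	for idx := cluster.Status.Replicas; idx < cluster.Spec.Replicas; idx++ {
		name := fmt.Sprintf("%s-%d", cluster.Name, idx)
		if _, err := cli.MemberAdd(ctx, []string{peerURL(cluster, name)}); err != nil {
			return err
		}
	}

	// 6. Wait until the StatefulSet becomes Ready.
	if err := waitForStatefulSetReady(ctx, r.Client, sts); err != nil {
		return err
	}

	// 7. Record the new membership.
	cluster.Status.Replicas = cluster.Spec.Replicas
	cluster.Status.InstanceNames = podNames(cluster)

	// 8. Clear the Rescaling condition.
	meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{
		Type: "Rescaling", Status: metav1.ConditionFalse, Reason: "ReplicasMatchSpec",
	})

	// 9. Re-render ETCD_INITIAL_CLUSTER from status.instanceNames.
	return updateStateConfigMap(ctx, r.Client, cluster)
}
```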

In case of errors, the EtcdCluster will be stuck in the Rescaling stage without damaging the cluster.

If the user cancels the operation (by updating the EtcdCluster's spec.replicas back to the old value), the StatefulSet's spec.replicas should be reverted and the Rescaling status condition should be set to False.

If the user sets spec.replicas < status.replicas to both cancel the scale-up and perform a scale-down, we should update the StatefulSet's spec.replicas to the CR's status.replicas, set Rescaling to False, and schedule a new reconciliation.
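
A short sketch of that cancellation handling, with the same imports and type assumptions as the scale-up sketch above; the Reason used when clearing the condition is an assumption:

```go
// Sketch: handle user cancellation while a scale-up is in progress.
// If spec.replicas dropped back to (or below) the last confirmed size, revert
// the StatefulSet and clear the Rescaling condition; a lower value also means
// a follow-up reconciliation will take the scale-down path.
func handleScaleUpCancel(cluster *EtcdCluster, sts *appsv1.StatefulSet) (requeue bool) {
	if cluster.Spec.Replicas > cluster.Status.Replicas {
		return false // a scale-up is still requested, nothing to cancel
	}
	// Revert the StatefulSet to the last confirmed size and clear the condition.
	sts.Spec.Replicas = &cluster.Status.Replicas
	meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{
		Type: "Rescaling", Status: metav1.ConditionFalse, Reason: "ReplicasMatchSpec",
	})
	// Schedule a new reconciliation only if the user went below the confirmed size.
	return cluster.Spec.Replicas < cluster.Status.Replicas
}
```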

Scaling down

When spec.replicas < status.replicas, the operator should scale the cluster down.

The process is the following (a rough Go sketch follows the list):

  1. Check that the cluster currently has quorum. If not, exit the reconciliation loop and wait until it becomes healthy. Scaling down is not possible, as changes to the member list must be agreed by quorum.
  2. Provided that the cluster has quorum, it is safe to perform scaling down.
  3. The operation should proceed on a per-pod basis. Only one pod can be safely deleted at a time.
  4. Calculate the last pod name as idx = status.replicas - 1 -> crdName-$(idx).
  5. Update the EtcdCluster status condition, setting Rescaling to True with Reason: ScalingClusterDown.
  6. Update the StatefulSet's spec.replicas to status.replicas - 1 (one member is removed per pass).
  7. Connect to the etcd cluster through the Service as root and run a command like etcdctl member remove crdName-$(idx). Running this command while the pod is still alive should be safe, as the pod should already have been sent the SIGTERM signal by the kubelet.
  8. Update the EtcdCluster status condition, setting Rescaling to False with Reason: ReplicasMatchSpec.
  9. If spec.replicas < status.replicas, reschedule the reconciliation to run this algorithm from the beginning.
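
A condensed sketch of this flow, with the same imports and assumptions (EtcdCluster, hasQuorum, the embedded controller-runtime client) as the scale-up sketch above:

```go
// Sketch: remove one member per reconciliation, in the order described above.
func (r *EtcdClusterReconciler) scaleDownOne(ctx context.Context, cluster *EtcdCluster, sts *appsv1.StatefulSet, cli *clientv3.Client) error {
	// 1-2. Membership changes must be agreed by quorum; back off otherwise.
	if !hasQuorum(ctx, cli) {
		return fmt.Errorf("cluster has no quorum, requeueing")
	}

	// 3-4. Only one pod is removed per pass: the one with the highest ordinal.
	idx := cluster.Status.Replicas - 1
	name := fmt.Sprintf("%s-%d", cluster.Name, idx)

	// 5. Mark the cluster as rescaling.
	meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{
		Type: "Rescaling", Status: metav1.ConditionTrue, Reason: "ScalingClusterDown",
	})

	// 6. Shrink the StatefulSet by one.
	newReplicas := cluster.Status.Replicas - 1
	sts.Spec.Replicas = &newReplicas
	if err := r.Update(ctx, sts); err != nil {
		return err
	}

	// 7. Remove the member (the equivalent of `etcdctl member remove crdName-$(idx)`).
	members, err := cli.MemberList(ctx)
	if err != nil {
		return err
	}
	for _, m := range members.Members {
		if m.Name == name {
			if _, err := cli.MemberRemove(ctx, m.ID); err != nil {
				return err
			}
			break
		}
	}

	// 8. Record the new membership and clear the Rescaling condition.
	cluster.Status.Replicas = newReplicas
	cluster.Status.InstanceNames = cluster.Status.InstanceNames[:newReplicas]
	meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{
		Type: "Rescaling", Status: metav1.ConditionFalse, Reason: "ReplicasMatchSpec",
	})

	// 9. If more members still need to be removed, requeue so the algorithm
	//    runs again from the beginning.
	if cluster.Spec.Replicas < cluster.Status.Replicas {
		return fmt.Errorf("scale-down not finished, requeueing")
	}
	return nil
}
```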