Open com6056 opened 3 days ago
For example, a simple readiness probe to ensure pods only roll when it is safe to do so:
```shell
#!/bin/sh
# Query the cluster once so all checks see a consistent snapshot.
info=$(redis-cli cluster info)

state=$(echo "$info" | grep '^cluster_state:' | cut -d: -f2 | tr -d '[:space:]')
if [ "$state" != "ok" ]; then
    echo "FAIL: Cluster state is $state"
    exit 1
fi

slots_assigned=$(echo "$info" | grep '^cluster_slots_assigned:' | cut -d: -f2 | tr -d '[:space:]')
slots_ok=$(echo "$info" | grep '^cluster_slots_ok:' | cut -d: -f2 | tr -d '[:space:]')
if [ "$slots_assigned" != "$slots_ok" ]; then
    echo "FAIL: Not all assigned cluster slots are ok"
    exit 1
fi
```
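Wiring that script into the pod template would look something like this (the mount path and timings are illustrative, not anything the operator requires):

```yaml
readinessProbe:
  exec:
    command: ["/bin/sh", "/scripts/readiness.sh"]  # illustrative mount path
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
```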
But this will fail until the cluster has actually been created, and it will also fail while nodes are being added to or removed from the cluster, so the operator halts all progress.
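One possible workaround (a sketch, not something the operator provides) is a bootstrap-tolerant check: report ready while the node is still standalone so cluster creation isn't blocked, and only apply the strict check once the node has peers. The parsing of `cluster info` output into `key:value` fields matches what redis-cli prints today.

```shell
#!/bin/sh
# Sketch of a bootstrap-tolerant readiness check.

field() {
    # Pull one "key:value" field out of `redis-cli cluster info` output.
    echo "$1" | grep "^$2:" | cut -d: -f2 | tr -d '[:space:]'
}

check_ready() {
    info=$1
    [ -n "$info" ] || return 1    # could not query the node at all
    known=$(field "$info" cluster_known_nodes)
    if [ "$known" = "1" ]; then
        return 0                  # standalone node: cluster not created yet
    fi
    [ "$(field "$info" cluster_state)" = "ok" ]
}
```

The probe entry point would then be `check_ready "$(redis-cli cluster info)"`. Whether treating a standalone node as ready is safe depends on whether a Service can route traffic to it before it joins.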
Currently, we use this state from https://github.com/OT-CONTAINER-KIT/redis-operator/blob/08eb5eb2364aaa77f62b4b2740d17863c956deef/api/status/redis-cluster_status.go#L17 to indicate whether the cluster is ready. Is that sufficient for your use case? @com6056
Unfortunately, without a readiness probe the StatefulSet has no protection against rolling the pods too quickly, so we also need readiness probes (unless the operator can somehow add a feature to control how fast the pods roll during a RollingUpdate).
I think https://github.com/OT-CONTAINER-KIT/redis-operator/issues/923 is somewhat related, although that issue is about solving this in the operator itself rather than simply allowing something like a readiness probe to be used properly.
There is also another issue when you don't use readiness probes: scaling up the cluster causes requests to be sent to the new leader/follower pods before they have joined the cluster, resulting in request errors.
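For those freshly added pods, a stricter check could be to require that the node actually appears as a joined member before it reports ready. The sketch below is based on my reading of the `redis-cli cluster nodes` line format (flags contain `master`/`slave`, and a primary's slot ranges appear at the end of its line), so treat it as an assumption rather than a guaranteed interface:

```shell
#!/bin/sh
# has_joined: given `redis-cli cluster nodes` output and this node's id,
# succeed only if the node is a replica attached to a primary, or a
# primary that owns at least one slot range.
has_joined() {
    nodes=$1
    myid=$2
    line=$(echo "$nodes" | grep "^$myid ")
    [ -n "$line" ] || return 1
    case "$line" in
        *slave*) return 0 ;;                # replica attached to a primary
        *master*[0-9]-[0-9]*) return 0 ;;   # primary with assigned slot ranges
    esac
    return 1                                # joined in name only, no role yet
}
```

A probe would call it as `has_joined "$(redis-cli cluster nodes)" "$(redis-cli cluster myid)"`. This has the same bootstrapping caveat as the main script: it fails until the operator has actually added the node.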
It seems like we check the ready state of all of the replicas here: https://github.com/OT-CONTAINER-KIT/redis-operator/blob/08eb5eb2364aaa77f62b4b2740d17863c956deef/k8sutils/statefulset.go#L77
When you use a custom readiness probe that checks cluster status, though, the operator can get stuck during bootstrapping and scale in/out, since the probe keeps failing until the operator actually takes action to create the cluster or add/remove nodes.
Is there any way we can base the operator's checks on the liveness probes instead, or only take readiness into account when we expect the cluster to be in a fully ready state?