IBM / operator-for-redis-cluster

IBM Operator for Redis Cluster
https://ibm.github.io/operator-for-redis-cluster
MIT License
60 stars 34 forks

Max surge configuration #111

Open mgrzeszczak opened 12 months ago

mgrzeszczak commented 12 months ago

Is it possible to specify how many new pods can be created at one time? Similar to the `maxSurge` setting under `rollingUpdate` for k8s Deployments.
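For comparison, this is what the requested knob looks like on a standard Kubernetes Deployment (a generic Deployment fragment, not part of this operator's API):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # at most 2 extra pods above the desired replica count
      maxUnavailable: 0  # never drop below the desired replica count
```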

cin commented 12 months ago

Unfortunately, there isn't a way to specify how many pods can be created at a time. The current implementation spins up one pod at a time and doesn't use a Deployment or StatefulSet. While this simplifies key migration, it does slow things down if you're growing your cluster considerably.

mgrzeszczak commented 12 months ago

I don't see it working as you described. It spins up pod after pod but doesn't destroy any existing ones. At what point will it start destroying some of the existing pods?

EDIT: let's assume I have a cluster of 3 nodes with replication factor = 2, which gives me 9 pods.

How many pods will the operator create during an update before it starts deleting anything?
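For reference, the 9-pod figure follows from masters × (replicas per master + 1); a minimal sketch (the `total_pods` helper name is ours, not part of the operator):

```python
def total_pods(masters: int, replication_factor: int) -> int:
    # Each master gets `replication_factor` replicas, plus the master itself.
    return masters * (replication_factor + 1)

print(total_pods(3, 2))  # 9, matching the 3-node / RF=2 example above
```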

cin commented 12 months ago

If you're growing your cluster, why would it delete pods? Also, where did I say anything about how pods are deleted?

mgrzeszczak commented 12 months ago

Ok, sorry, it appears we misunderstood each other then. I'm interested in the case where I change, for example, the CPU/memory limits of my Redis pods and apply the change via Helm. Because we have resource limits on the namespace, I need to know how many additional pods will be created during such an update.
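For context, namespace-level limits of the kind described here are typically enforced with a ResourceQuota; a sketch (the name and figures are illustrative, not from this setup):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: redis-quota        # hypothetical quota name
spec:
  hard:
    pods: "12"             # hard cap on pods in the namespace
    limits.cpu: "24"       # sum of all containers' CPU limits
    limits.memory: 48Gi    # sum of all containers' memory limits
```

With a quota like this in place, any surge pod the update creates counts against `pods` and the aggregate limits, which is why the surge size matters.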

cin commented 12 months ago

That's going to be controlled by the Kubernetes scheduler, not the operator. I would imagine PodDisruptionBudgets will come into play too. I've been trying to dig up docs on the specifics of altering memory and CPU limits but haven't found much helpful info; https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/ is about all I've found so far. I'm definitely curious to learn about your findings though, so please post back with your results. :)
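A PodDisruptionBudget of the kind mentioned above might look like this; the label selector and threshold are assumptions, not the operator's actual labels:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-cluster-pdb     # hypothetical name
spec:
  minAvailable: 6             # e.g. keep 6 of the 9 pods up during voluntary disruptions
  selector:
    matchLabels:
      app: redis-cluster      # assumed label; check the pods the operator creates
```

Note a PDB only constrains voluntary disruptions (evictions, drains); it doesn't by itself control how many extra pods an update creates.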