grafana / rollout-operator

Kubernetes Rollout Operator
Apache License 2.0

Guard against race conditions #81

Open · jhalterman opened this issue 1 year ago

jhalterman commented 1 year ago

While #79 described a situation that could happen commonly, there are some less likely but still possible race conditions that we might consider guarding against.

Concurrent downscale + rollout

If a downscale is in progress but hasn't updated the last-downscale annotation yet, it's possible that a rollout could start concurrently, bypassing the min-time-between-zones-downscale check in the controller.

Concurrent downscale + downscale

If downscales for two zones occur concurrently, they can both be accepted before either has updated the last-downscale annotation, bypassing the min-time-between-zones-downscale check in the prepare-downscale webhook.
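To make the race window concrete, here is a minimal, self-contained Go sketch of the check-then-act pattern described above. The annotation key, the interval, and the `simulateRace` helper are all illustrative stand-ins, not the operator's actual code; the point is only that when two zones' checks both run before either record, both downscales are accepted.

```go
package main

import (
	"fmt"
	"time"
)

// In-memory stand-in for a statefulset's annotations; the key name is
// illustrative, not necessarily the operator's real annotation key.
const lastDownscaleKey = "last-downscale"

const minTimeBetweenZonesDownscale = 10 * time.Minute

var annotations = map[string]string{}

// allowed models the webhook's check: a downscale passes only if enough
// time has elapsed since the recorded last downscale.
func allowed(now time.Time) bool {
	raw, ok := annotations[lastDownscaleKey]
	if !ok {
		return true // no downscale recorded yet
	}
	last, _ := time.Parse(time.RFC3339, raw)
	return now.Sub(last) >= minTimeBetweenZonesDownscale
}

// simulateRace interleaves two zones' check-then-record sequences so that
// both checks run before either record - the race window in question.
func simulateRace() (bool, bool) {
	now := time.Now()
	zoneB := allowed(now)                                    // zone B checks: passes
	zoneC := allowed(now)                                    // zone C checks: also passes, annotation still unset
	annotations[lastDownscaleKey] = now.Format(time.RFC3339) // zone B records
	annotations[lastDownscaleKey] = now.Format(time.RFC3339) // zone C records
	return zoneB, zoneC
}

func main() {
	b, c := simulateRace()
	fmt.Println(b, c) // both true: two zones downscaled inside the window
}
```

The same shape applies to the concurrent downscale + rollout case: any check that reads the annotation before a concurrent writer has recorded its operation can pass when it shouldn't.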

Potential fix

One reliable way to serialize operations across different statefulsets is to lock on a common resource, such as with an annotation on a common CRD, using a resourceVersion on updates. Absent that, we could add a lock annotation to a statefulset and use a double check (check, lock, check) when adding it to guard against races, though I'm not positive this is completely safe.
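A rough sketch of the optimistic-locking idea, using an in-memory stand-in for the API server's resourceVersion semantics (the `store` type and `tryDownscale` are hypothetical, for illustration only). Every successful update bumps the version, a write with a stale version is rejected, and the caller re-reads and re-checks on conflict, so the check and the write become one serialized step:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// store mimics the API server's optimistic concurrency: updates carry the
// resourceVersion they read, and stale versions are rejected.
type store struct {
	mu              sync.Mutex
	resourceVersion int
	annotations     map[string]string
}

var errConflict = errors.New("conflict: resourceVersion is stale")

func (s *store) get() (int, map[string]string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	copied := map[string]string{}
	for k, v := range s.annotations {
		copied[k] = v
	}
	return s.resourceVersion, copied
}

func (s *store) update(rv int, annotations map[string]string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if rv != s.resourceVersion {
		return errConflict
	}
	s.resourceVersion++
	s.annotations = annotations
	return nil
}

// tryDownscale re-reads, re-checks, and writes under the version guard,
// retrying on conflict. A concurrent writer forces a re-read, so two zones
// can no longer both pass the check inside the same window.
func tryDownscale(s *store, zone string) bool {
	for {
		rv, ann := s.get()
		if _, taken := ann["last-downscale"]; taken {
			return false // another zone already recorded its downscale
		}
		ann["last-downscale"] = zone
		if err := s.update(rv, ann); err == nil {
			return true
		}
		// conflict: someone updated concurrently; loop and re-check
	}
}

func main() {
	s := &store{annotations: map[string]string{}}
	fmt.Println(tryDownscale(s, "zone-b")) // true: first writer wins
	fmt.Println(tryDownscale(s, "zone-c")) // false: re-check sees the record
}
```

With a real client this is the usual read-modify-write loop with retry on `Conflict` errors; the double-check (check, lock, check) mentioned above falls out naturally, because the check is repeated after every re-read.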

56quarters commented 1 year ago

Concurrent downscale + downscale: If downscales for two zones occur concurrently, they can both be accepted before either has updated the last-downscale annotation, bypassing the min-time-between-zones-downscale check in the prepare-downscale webhook.

This situation isn't possible. Zone A is controlled by HPA and zones B and C are controlled by the rollout-operator. The rollout operator only adjusts replicas in zones B and C down if the number of replicas in zone A has already been adjusted down.

jhalterman commented 1 year ago

I had Loki in mind for that scenario, where they use an HPA for each zone. It may also be possible for Mimir, if the controller is downscaling zone C, and then the zone A HPA decides to downscale again.