What does this PR do?
Support pausing rolling updates.
kubectl plugin commands:
k eds pause-rolling-update <eds name>
k eds unpause-rolling-update <eds name>
When paused:
The replicaset controller won't delete pods anymore, which prevents the rolling update from proceeding
The controller won't block the creation of new pods, so if new nodes appear in the meantime it will create pods on them even while paused
Pausing a rolling update when the ERS is deployed for the first time in the cluster has no effect (just like canary)
The metric eds_status_rolling_update_paused is set to 1
Support freezing rollouts.
kubectl plugin commands:
k eds freeze-rollout <eds name>
k eds unfreeze-rollout <eds name>
When frozen:
The replicaset controller won't delete or create pods
The controller will block the creation of new pods, so no pods are created on nodes that appear in the meantime
Freezing a rollout will take effect even if the ERS is deployed for the first time in the cluster
The metric eds_status_rollout_frozen is set to 1
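The difference between pausing and freezing described above can be sketched as a small decision helper. This is an illustrative sketch with hypothetical names (`RolloutState`, `CanDeletePods`, `CanCreatePods`), not the controller's actual code:

```go
package main

import "fmt"

// RolloutState mirrors the two toggles described above (hypothetical type).
type RolloutState struct {
	Paused bool // rolling update paused
	Frozen bool // rollout frozen
}

// CanDeletePods: both pausing and freezing stop pod deletion,
// which is what halts the rolling update.
func CanDeletePods(s RolloutState) bool {
	return !s.Paused && !s.Frozen
}

// CanCreatePods: only freezing blocks creation of new pods;
// a paused rollout still creates pods on new nodes.
func CanCreatePods(s RolloutState) bool {
	return !s.Frozen
}

func main() {
	paused := RolloutState{Paused: true}
	frozen := RolloutState{Frozen: true}
	fmt.Println(CanDeletePods(paused), CanCreatePods(paused)) // false true
	fmt.Println(CanDeletePods(frozen), CanCreatePods(frozen)) // false false
}
```

The key asymmetry is in `CanCreatePods`: a pause only stops the update from progressing, while a freeze stops all pod churn.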
Motivation
More flexibility and better control of the EDS when deploying new changes
Additional Notes
Make sure you build the kubectl plugin locally during QA: make kubectl-eds -> ./bin/kubectl-eds
Describe your test plan
Make sure the described behaviour above is respected
Make sure the EDS state is updated accordingly in each case
Make sure the ERS conditions are updated accordingly in each case
Make sure the metrics are updated accordingly
Make sure the kubectl plugin commands don't succeed when in canary phase
Try edge cases like freezing and pausing at the same time, make sure the freeze wins
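The freeze-wins precedence from the last test case could be expressed as a single resolution helper, which makes it easy to unit-test. Hypothetical function name; a sketch of the intended semantics, not the controller's actual implementation:

```go
package main

import "fmt"

// effectiveState resolves the paused/frozen toggles into one mode,
// with freeze taking precedence over pause (hypothetical helper).
func effectiveState(paused, frozen bool) string {
	switch {
	case frozen:
		return "frozen" // freeze wins, even if also paused
	case paused:
		return "paused"
	default:
		return "active"
	}
}

func main() {
	// Both toggles set at once: the freeze wins.
	fmt.Println(effectiveState(true, true)) // frozen
}
```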