wave-k8s / wave

Kubernetes configuration tracking controller
Apache License 2.0

Conflict with the default Kubernetes rollout #108

Closed messiahUA closed 1 month ago

messiahUA commented 2 years ago

I've tried wave for the first time, and it looks like there is a possible conflict with the default Kubernetes rollout when you modify a configmap/secret reference. Basically, k8s starts its rollout, and then wave makes a change and starts another rollout.

I don't think it is critical, because I can't come up with a case where it is, but maybe this can cause some unforeseen race conditions. And as far as I understand the logic of wave, there is no way around such situations, because it needs to update the annotation in any case?

Also, I want to add that in my particular case I need to monitor only one secret, because configmap updates are managed by another process. So ideally I would configure wave to watch only a specified resource (the secret), but unfortunately there is no such feature...
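For context, here is a minimal sketch of how a Deployment opts in to Wave today. All resource names are hypothetical; the `wave.pusher.com/update-on-config-change` annotation is the one Wave's README documents, and once it is set, Wave tracks *every* ConfigMap and Secret the pod template references, with no way to limit it to a single one:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                       # hypothetical name
  annotations:
    wave.pusher.com/update-on-config-change: "true"  # opt this Deployment in to Wave
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0
          envFrom:
            - secretRef:
                name: my-secret    # Wave tracks this reference...
            - configMapRef:
                name: my-config    # ...and this one too, whether you want it or not
```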

wonderhoss commented 2 years ago

Are you referring to a situation where both a ConfigMap and a Deployment are updated at roughly the same time? In those situations it is possible that, depending on ordering, a new Pod will start still using the old configuration and then immediately be replaced by one using the new one.

Without Wave this could still be a problem, causing some Pods to run with a different configuration from others. Wave merely ensures that in these cases the state everything eventually settles on is the desired one.

Everything in Kubernetes is eventually consistent, so such situations should be expected. Applications should make use of PodDisruptionBudgets and employ techniques that allow code to function with both old and new versions of a configuration set in order to avoid being negatively impacted.
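For example, a PodDisruptionBudget (names hypothetical) that keeps at least two replicas of the workload available while back-to-back rollouts churn through Pods:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb      # hypothetical name
spec:
  minAvailable: 2       # never voluntarily evict below two ready Pods
  selector:
    matchLabels:
      app: my-app       # must match the Deployment's Pod labels
```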

messiahUA commented 2 years ago

@gargath I'm referring to the situation where a configmap reference changes in the deployment. For example, you have a deployment which references configmap1 and you change it to configmap2; k8s starts its rollout, which is then superseded by wave at the same time. As you mentioned, I think this is not a problem and is expected by k8s, but it's just a thought. Maybe ideally it should be handled in a webhook which injects the new checksum on edit, to prevent the double rollout, but probably it's not worth it.
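Concretely, the double rollout looks roughly like this sketch (resource names hypothetical; `wave.pusher.com/config-hash` is the annotation Wave writes into the pod template). The user edit changes the pod template, so the Deployment controller starts rollout #1; Wave then recomputes the hash over the newly referenced data and updates its annotation, changing the template a second time and triggering rollout #2:

```yaml
# Pod template immediately after the user edit -> rollout #1
spec:
  template:
    metadata:
      annotations:
        wave.pusher.com/config-hash: "abc123"   # still the hash of configmap1's data
    spec:
      volumes:
        - name: config
          configMap:
            name: configmap2                    # user changed this from configmap1
# Moments later Wave reconciles and rewrites the annotation,
# mutating the template again -> rollout #2:
#       wave.pusher.com/config-hash: "def456"   # hash of configmap2's data
```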

Also, so I don't have to create another issue for the question: are there any plans to implement the possibility of monitoring only specified configmap(s)/secret(s)?

github-actions[bot] commented 2 years ago

This issue has been inactive for 60 days. If the issue is still relevant please comment to re-activate the issue. If no action is taken within 7 days, the issue will be marked closed.

jabdoa2 commented 2 months ago

We should add an optional mutating webhook to implement this. It would prevent restarts in all cases where configmaps/secrets exist prior to creation of the deployment. In my opinion that would be worth implementing.

Unfortunately, this will not solve the case for most helm charts, where you install the configmap and the deployment at the same time, especially when you add other controllers (such as cert-manager) into the mix. In those cases the pods might actually be pending until those secrets exist and then start with the first version, but wave will restart them right away.
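A sketch of what the proposed opt-in webhook registration might look like, assuming a hypothetical `wave-webhook` Service serving a `/mutate` endpoint that injects the config hash at admission time, before the Deployment controller ever sees the object (all names here are illustrative, not part of Wave today):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: wave-config-hash-injector        # hypothetical name
webhooks:
  - name: config-hash.wave.example.com   # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                # don't block Deployments if the webhook is down
    clientConfig:
      service:
        name: wave-webhook               # hypothetical Service running the injector
        namespace: wave-system
        path: /mutate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"] # inject the hash before the rollout starts
        resources: ["deployments"]
```

Since the webhook can only hash ConfigMaps/Secrets that already exist at admission time, this matches the caveat above: it helps when the config predates the Deployment, but not when both are created together by a chart.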

jabdoa2 commented 1 month ago

Should be fixed by #155

jabdoa2 commented 1 month ago

@toelke this one can be closed. I guess I should have commented in the PR instead of here.