Closed: ngonemettle closed this issue 1 year ago.
I have not seen this reproduced anywhere, and there are things in the logs that make me think some Kyverno policy or something else you have configured is blocking the update. Can you confirm that there is no policy in place that would affect pod updates?
We have Kyverno installed on our cluster, but we don't have policies blocking pod updates. We do have ones adding labels.
I think I just tracked this down. We have policies that mutate Pods, which has so far always been fine. What's happened here is that one of those policies applies a JSON patch, which, unlike a strategic merge patch, doesn't inherently become a no-op when it's repeated. We haven't had any issues elsewhere because nothing updates Pods, only creates/deletes them. argo-rollouts mutates existing Pods here, aiming to touch only the metadata, which would be fine on its own, but the update triggers the policy to apply again.
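The offending policy isn't shown in the thread, but here is a minimal sketch of that failure mode, with all names hypothetical: an RFC 6902 `add` that appends to an array grows the array on every evaluation, so re-running the rule when a Pod is updated is not a no-op.

```yaml
# Hypothetical policy for illustration only; not the policy from this issue.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: append-toleration
spec:
  rules:
    - name: append-toleration
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        # "add" with the "-" index appends a new element each time the
        # rule fires, so a second evaluation (for example, one triggered
        # by a metadata-only Pod update) duplicates the entry.
        patchesJson6902: |-
          - op: add
            path: /spec/tolerations/-
            value:
              key: dedicated
              operator: Exists
              effect: NoSchedule
```

A strategic merge patch expressing the same intent converges instead: applying the same patch a second time leaves the object unchanged.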
My initial attempt to fix this by setting

```yaml
preconditions:
  all:
    - key: "{{request.operation || 'BACKGROUND'}}"
      operator: Equals
      value: CREATE
```

doesn't seem to have worked, though, so I'm still a bit confused.
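For reference, `preconditions` sits at the rule level in a Kyverno policy. Here is a sketch of how the snippet above slots into a full rule, with hypothetical rule and label names (the real policy isn't shown in the thread); as I understand the `||` default, it covers background scans, where no admission request is present:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
    - name: add-labels
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        all:
          # request.operation is empty during background scans, so the
          # JMESPath default ('BACKGROUND') makes the Equals check fail
          # there; the rule should then run only on admission CREATEs.
          - key: "{{request.operation || 'BACKGROUND'}}"
            operator: Equals
            value: CREATE
      mutate:
        patchesJson6902: |-
          - op: add
            path: /metadata/labels/team
            value: example
```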
We have now fixed the Kyverno policy, and Argo Rollouts is working fine. I'm closing the issue. Thank you for your support.
Awesome, glad it is working for you.
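The thread doesn't show the final version of the policy. For anyone landing here with the same symptom, one common way to make a label-adding mutation idempotent is to use a strategic merge patch instead of a JSON patch; a sketch under that assumption, with hypothetical policy and label names:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-team-label
spec:
  rules:
    - name: add-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        # A strategic merge patch converges: applying it again to a Pod
        # that already carries the label changes nothing, so repeated
        # evaluation on Pod updates is harmless.
        patchStrategicMerge:
          metadata:
            labels:
              team: example
```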
Describe the bug
After following all the steps for a canary version to be promoted to stable, Argo Rollouts still keeps pods from the older stable version.
To Reproduce
Here are the Rollout, VirtualService, and DestinationRule manifests applied:
Expected behavior
After a complete promotion of a canary release, only pods from the newly stable version should be kept; pods from old revisions should be removed.
Version
We see this issue on the argo-rollouts Helm chart version 2.22.1 and also on version 2.28.0.
Logs from the Argo Rollouts controller
Logs from EKS
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.