Closed: sbueringer closed this issue 1 month ago.
This issue is currently awaiting triage.
CAPI contributors will take a look as soon as possible, apply one of the triage/* labels and provide further guidance.
/kind bug (not a CAPI bug as of now, but something that can lead to a CAPI bug if we don't keep it in mind)
/priority important-longterm
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign
I'll have to find time to verify that the k/k fix addresses our problem in ssa.Patch. Then we would be good for Kubernetes >= 1.31.0.
As of Kubernetes 1.31 the issue is fixed. I.e. if we only have to support Kubernetes 1.31 or above, this limitation no longer applies (although we should double-check once we want to rely on this).
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@sbueringer is it ok to close this issue? The work in k/k has been completed; we are not aware of any impact on CAPI, and there is not much we can do apart from recommending that users run the latest k/k versions for their management cluster to avoid any residual risk. (This issue will remain in our GitHub history even if closed.)
Fine for me to close. We should really try not to forget this whenever we touch SSA (like many other issues with SSA :/)
/close
@sbueringer: Closing this issue.
Context:
Some details about the k/k issue: the apiserver bumps the resourceVersion under the following circumstances (even though ideally it shouldn't):
As mentioned above, we are safe today, or at least we are not aware of any issues. If we expand our usage of SSA, we have to keep the current limitations in mind and implement more extensive SSA caching if necessary.
One way to address this is to also leverage SSA dry-run in ssa.Patch like we do in the Cluster topology controller.
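To make the caching idea above concrete, here is a minimal, self-contained Go sketch. All names (requestKey, ssaCache, applyIfNeeded) are illustrative, not the actual CAPI ssa package API: the idea is to remember the hash of the desired object together with the last observed resourceVersion, and skip the apply round-trip entirely when an identical request was previously observed to be a no-op.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// requestKey identifies an SSA apply request: a hash of the desired
// object (the "intent") plus the resourceVersion we last observed.
// If neither changed, re-sending the apply request cannot change the
// object, so the API call can be skipped.
func requestKey(desiredYAML, observedResourceVersion string) string {
	h := sha256.Sum256([]byte(desiredYAML + "\x00" + observedResourceVersion))
	return hex.EncodeToString(h[:])
}

// ssaCache remembers request keys whose previous apply was a no-op,
// so identical follow-up requests can be short-circuited.
type ssaCache struct{ seen map[string]bool }

func newSSACache() *ssaCache { return &ssaCache{seen: map[string]bool{}} }

// applyIfNeeded calls apply only when the (intent, resourceVersion)
// pair is not cached as a known no-op. apply returns the
// resourceVersion after the server processed the request.
func (c *ssaCache) applyIfNeeded(desiredYAML, rv string, apply func() string) string {
	key := requestKey(desiredYAML, rv)
	if c.seen[key] {
		return rv // cached no-op: skip the API round-trip
	}
	newRV := apply()
	if newRV == rv {
		// The server did not bump resourceVersion: record the no-op.
		c.seen[key] = true
	}
	return newRV
}

func main() {
	cache := newSSACache()
	calls := 0
	apply := func() string { calls++; return "100" } // server keeps rv at 100

	rv := cache.applyIfNeeded("spec: {replicas: 3}", "100", apply)
	rv = cache.applyIfNeeded("spec: {replicas: 3}", rv, apply) // skipped via cache
	fmt.Println(calls, rv)                                     // one real apply call
}
```

Note the hedge built into the design: a request is only recorded as a no-op after the server actually left resourceVersion unchanged, so on pre-1.31 apiservers that spuriously bump resourceVersion the cache simply never short-circuits and behavior stays correct, just without the savings.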
Related work to mitigate this issue:
Appendix: Are we affected?
Appendix: Low-level apiserver issue details