MaxRink opened this issue 3 months ago (status: Open)
This issue is currently awaiting triage.
If CAPI Operator contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, the lifecycle/stale label is applied
- After further inactivity once lifecycle/stale was applied, the lifecycle/rotten label is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
What steps did you take and what happened:
If I have an existing cluster bootstrapped by clusterctl and I then migrate to the cluster-api-operator, the version status never gets correctly reflected in clusterctl.
What did you expect to happen:
Cluster-api-operator keeps the versions in sync so that clusterctl still shows the installed versions correctly.
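The drift can be observed by comparing clusterctl's inventory objects with the operator's provider objects. The commands below are a reproduction sketch, not a confirmed procedure: they assume a live management cluster that was bootstrapped with clusterctl and then migrated to cluster-api-operator, and use the standard clusterctl inventory CRD and the operator's provider CRDs.

```shell
# Reproduction sketch; requires a reachable management cluster.
# Commands are guarded with || true so the script exits cleanly
# when no cluster is available.

# clusterctl records what it installed in its own inventory objects:
kubectl get providers.clusterctl.cluster.x-k8s.io -A || true

# the operator tracks the actually installed versions in its own CRDs:
kubectl get coreproviders,infrastructureproviders -A || true

# clusterctl reads only its inventory, so after an operator-driven
# upgrade it keeps reporting the versions recorded at bootstrap time:
clusterctl upgrade plan || true
```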
Anything else you would like to add:
Environment:
- Kubernetes version (use kubectl version): 1.30
- OS (e.g. from /etc/os-release):

/kind bug