Open dlipovetsky opened 4 months ago
This issue is currently awaiting triage.
If CAPI contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
This is a breaking change, probably needs a new api version.
/kind api
@vincepri: The label(s) `kind/api` cannot be applied, because the repository doesn't have them.
/area api
Wondering what the deprecation story around finalizers looks like (in general). Is there prior art?
One main question probably: How are CAPI users using our finalizers?
cc @JoelSpeed
> Wondering what the deprecation story around finalizers looks like (in general). Is there prior art?
Seems to me there should be an open conversation with upstream to establish a deprecation-story recommendation for everyone, i.e. with the participants/OWNERS around https://github.com/kubernetes/kubernetes/issues/119445.
As far as I can tell, this would be a breaking change for any user assuming/using a finalizer with an exact name.
I'd suggest we start by introducing and using both old and new versions of the finalizers for a release, before removing them in a subsequent release. It wouldn't be a breaking change to add the new finalizer versions to the current release, we would then have to recommend folks to upgrade via whatever the next Y stream release would be, before they upgrade to the next API version.
This would give some period where both finalizers are present, in which they can migrate any tooling they already have.
On down conversion, I guess we could look at restoring the old finalizer, though I haven't personally mutated objectmeta during a conversion before, so I can't say how easy or hard that would be.
Do we currently have plans for timelines around a new API version?
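The two-release migration proposed above could be sketched as follows. This is an illustrative, stdlib-only sketch, not the actual CAPI controller code; the helper name and the two-phase `dropOld` switch are assumptions:

```go
package main

import "fmt"

// ensureFinalizerMigration sketches the dual-finalizer transition:
// in release N a controller adds the new domain-qualified finalizer
// alongside the old one (dropOld=false), giving external tooling a
// release to migrate; in release N+1 the legacy name is removed
// (dropOld=true). Helper name is hypothetical.
func ensureFinalizerMigration(finalizers []string, oldName, newName string, dropOld bool) []string {
	out := make([]string, 0, len(finalizers)+1)
	hasNew := false
	for _, f := range finalizers {
		if f == oldName && dropOld {
			continue // later release: drop the legacy finalizer
		}
		if f == newName {
			hasNew = true
		}
		out = append(out, f)
	}
	if !hasNew {
		out = append(out, newName) // ensure the new name is present
	}
	return out
}

func main() {
	fins := []string{"cluster.cluster.x-k8s.io"}
	// Release N: both names present.
	fins = ensureFinalizerMigration(fins, "cluster.cluster.x-k8s.io", "cluster.x-k8s.io/cluster", false)
	fmt.Println(fins) // [cluster.cluster.x-k8s.io cluster.x-k8s.io/cluster]
	// Release N+1: legacy name removed.
	fins = ensureFinalizerMigration(fins, "cluster.cluster.x-k8s.io", "cluster.x-k8s.io/cluster", true)
	fmt.Println(fins) // [cluster.x-k8s.io/cluster]
}
```

Since both names are added and removed in the same reconcile, there is no window where an object carries neither finalizer.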
Isn't the presence of both new and old finalizers resulting in more stages where the user is broken, compared to having a single stage where the finalizer moves from old to new?
How is the user broken if our controllers are adding & removing more finalizers?
To answer my own question. I guess they are broken if they rely on the exact existence (or rather absence) of finalizers?
I guess relying on this only makes sense if they try to implement some sort of ordering (maybe?) around finalizers. Which feels a bit like an anti-pattern?
> I guess relying on this only makes sense if they try to implement some sort of ordering (maybe?) around finalizers. Which feels a bit like an anti-pattern?
This has always been explicitly called out as an anti-pattern. And even if they were doing this, having both finalizers shouldn't cause issues, right? They would be looking for the presence of the old version and block their logic on it. Additional finalizers shouldn't be a problem unless they were waiting for there to be precisely their own finalizer and the CAPI one, but that is so easily broken I'd be surprised if anyone was doing it.
> Isn't the presence of both new and old finalizers resulting in more stages where the user is broken, compared to having a single stage where the finalizer moves from old to new?
Since both would be added and removed from each object at the same time, I don't think it would break users, it would just mean there's an additional value you have that you could rely on.
However, the point about people not relying on other's finalizers is a fair one. Are finalizers considered an implementation detail, would it really be a breaking change to change the value if each controller has the correct logic to migrate from old to new? 🤔
Let's just say, this is probably the first time we've worried about introducing a new finalizer. We have been introducing a lot of them within v1beta1. I'm also a bit hesitant to consider the existence/absence of a finalizer an API.
> Are finalizers considered an implementation detail, would it really be a breaking change to change the value if each controller has the correct logic to migrate from old to new? 🤔
I think it's fair to say they are an implementation detail. However, I don't see any downside to the conservative approach you proposed:
> I'd suggest we start by introducing and using both old and new versions of the finalizers for a release, before removing them in a subsequent release
This is a lot of effort and only a little benefit. Maybe we could update the upstream code which generates the error message? I don't think the current name (without a slash) is a real problem.
The error comes from the kube-apiserver and is intended to be verbose, see the linked issue:
I don't think the format will get changed on upstream side.
I'm also not looking forward to this work, but we probably don't have another choice.
Related Slack thread: https://kubernetes.slack.com/archives/C0EG7JC6T/p1721923881095799
Linking a comment about a proposed convention for new finalizers https://github.com/kubernetes-sigs/cluster-api/pull/10791#discussion_r1732904949
In the meantime, we are hiding "finalizer name" warnings for finalizers in the cluster.x-k8s.io group by default in the `move` command (https://github.com/kubernetes-sigs/cluster-api/pull/11129). When we fix our finalizers, we can remove this warning handler.
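The suppression can be sketched as a simple message predicate. This is a hedged, stdlib-only illustration (the function name and exact matching strategy are assumptions; the real handler lives in the linked PR):

```go
package main

import (
	"fmt"
	"strings"
)

// suppressCAPIFinalizerWarning sketches the filtering idea: drop only
// the apiserver's "prefer a domain-qualified finalizer name" warnings
// that refer to a finalizer in the cluster.x-k8s.io group, and let
// every other warning through. Function name is hypothetical.
func suppressCAPIFinalizerWarning(msg string) bool {
	return strings.Contains(msg, "prefer a domain-qualified finalizer name") &&
		strings.Contains(msg, "cluster.x-k8s.io")
}

func main() {
	warnings := []string{
		`metadata.finalizers: "cluster.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers`,
		`some other apiserver warning`,
	}
	for _, w := range warnings {
		if suppressCAPIFinalizerWarning(w) {
			continue // hidden by the custom warning handler
		}
		fmt.Println(w) // prints only: some other apiserver warning
	}
}
```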
What steps did you take and what happened?
Our finalizers are not domain-qualified. For example, our Cluster finalizer is `cluster.cluster.x-k8s.io`. To make it domain-qualified, we could change it to `cluster.x-k8s.io/cluster`.
What did you expect to happen?
Cluster API version
v1.7.3
Kubernetes version
Server Version: v1.29.6
Anything else you would like to add?
After the change tracked in https://github.com/kubernetes/kubernetes/issues/119445 merged and was released in v1.29.0, the API server began to return a warning whenever a client creates a Custom Resource with a finalizer that is not domain-qualified.
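For illustration, the shape the apiserver is nudging toward can be expressed as a small predicate. This is a loose sketch (the helper name is hypothetical, and the real apimachinery validation is stricter: DNS-subdomain prefix, qualified-name suffix, length limits):

```go
package main

import (
	"fmt"
	"strings"
)

// isDomainQualified loosely mirrors the check behind the warning:
// a finalizer avoids it when it has a "domain/name" shape,
// e.g. cluster.x-k8s.io/cluster rather than cluster.cluster.x-k8s.io.
func isDomainQualified(finalizer string) bool {
	domain, name, found := strings.Cut(finalizer, "/")
	return found && strings.Contains(domain, ".") && name != ""
}

func main() {
	fmt.Println(isDomainQualified("cluster.cluster.x-k8s.io")) // false: triggers the warning
	fmt.Println(isDomainQualified("cluster.x-k8s.io/cluster")) // true: no warning
}
```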
Cluster API users who use Kubernetes v1.29.0 or newer can expect to find these warnings when our controllers add finalizers to resources, and when we execute `clusterctl move`, which creates Custom Resources with finalizers on the target cluster.
Examples of warnings
clusterctl move
```
> ./clusterctl move --kubeconfig=src.kubecconfig --to-kubeconfig=dst.kubeconfig
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=1
Waiting for all resources to be ready to move
Creating objects in the target cluster
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "cluster.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockercluster.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "kubeadm.controlplane.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
Deleting objects from the source cluster
```
capi-controller-manager
```
> kubectl logs -n capi-system capi-controller-manager-6db447f75f-kp94j | grep -i warning
I0719 17:41:28.440401       1 warning_handler.go:65] "metadata.finalizers: \"cluster.cluster.x-k8s.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers" logger="KubeAPIWarningLogger"
I0719 17:41:28.546145       1 warning_handler.go:65] "metadata.finalizers: \"addons.cluster.x-k8s.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers" logger="KubeAPIWarningLogger"
I0719 17:41:28.775619       1 warning_handler.go:65] "metadata.finalizers: \"machine.cluster.x-k8s.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers" logger="KubeAPIWarningLogger"
```
Label(s) to be applied
/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.