kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Our finalizers are not domain-qualified, and therefore do not conform to requirements #10914

Open dlipovetsky opened 3 months ago

dlipovetsky commented 3 months ago

What steps did you take and what happened?

> Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer.

-- https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers

Our finalizers are not domain-qualified.

For example, our Cluster finalizer is `cluster.cluster.x-k8s.io`. To make it domain-qualified, we could change it to `cluster.x-k8s.io/cluster`.
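As a sketch, the change in Go would look something like the following (the qualified name is only the example above, not a decided value):

```go
package v1beta1

const (
	// ClusterFinalizer is the current value; kube-apiserver v1.29+
	// warns when it is set because it is not domain-qualified.
	ClusterFinalizer = "cluster.cluster.x-k8s.io"

	// ClusterFinalizerQualified is a hypothetical replacement that
	// follows the "domain/name" convention from the Kubernetes docs.
	ClusterFinalizerQualified = "cluster.x-k8s.io/cluster"
)
```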

What did you expect to happen?

Cluster API version

v1.7.3

Kubernetes version

Server Version: v1.29.6

Anything else you would like to add?

After the fix for https://github.com/kubernetes/kubernetes/issues/119445 merged and was released in v1.29.0, the API server began to return a warning whenever a client creates a Custom Resource with a finalizer that is not domain-qualified.

Cluster API users on Kubernetes v1.29.0 or newer can expect to see these warnings whenever our controllers add finalizers to resources, and whenever they run clusterctl move, which creates Custom Resources with finalizers on the target cluster.

Examples of warnings

clusterctl move

```
> ./clusterctl move --kubeconfig=src.kubecconfig --to-kubeconfig=dst.kubeconfig
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=1
Waiting for all resources to be ready to move
Creating objects in the target cluster
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "cluster.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockercluster.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "kubeadm.controlplane.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
Deleting objects from the source cluster
```
capi-controller-manager

```
> kubectl logs -n capi-system capi-controller-manager-6db447f75f-kp94j | grep -i warning
I0719 17:41:28.440401 1 warning_handler.go:65] "metadata.finalizers: \"cluster.cluster.x-k8s.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers" logger="KubeAPIWarningLogger"
I0719 17:41:28.546145 1 warning_handler.go:65] "metadata.finalizers: \"addons.cluster.x-k8s.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers" logger="KubeAPIWarningLogger"
I0719 17:41:28.775619 1 warning_handler.go:65] "metadata.finalizers: \"machine.cluster.x-k8s.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers" logger="KubeAPIWarningLogger"
```
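These warnings are emitted server-side; the client-side pattern that triggers them is just the ordinary controller-runtime finalizer helper. A minimal sketch (assuming the existing clusterv1.ClusterFinalizer constant):

```go
package controllers

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureClusterFinalizer adds the finalizer client-side; the warning is
// returned by the apiserver on the subsequent update because the name
// has no "domain/" prefix.
func ensureClusterFinalizer(cluster *clusterv1.Cluster) bool {
	return controllerutil.AddFinalizer(cluster, clusterv1.ClusterFinalizer)
}
```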

Label(s) to be applied

/kind bug

One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

k8s-ci-robot commented 3 months ago

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
vincepri commented 3 months ago

This is a breaking change, probably needs a new api version.

/kind api

k8s-ci-robot commented 3 months ago

@vincepri: The label(s) kind/api cannot be applied, because the repository doesn't have them.

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/10914#issuecomment-2241193561):

> This is a breaking change, probably needs a new api version.
>
> /kind api

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
vincepri commented 3 months ago

/area api

sbueringer commented 2 months ago

Wondering what the deprecation story around finalizers looks like (in general). Is there prior art?

One main question probably: How are CAPI users using our finalizers?

sbueringer commented 2 months ago

cc @JoelSpeed

neolit123 commented 2 months ago

> Wondering what the deprecation story around finalizers looks like (in general). Is there prior art?

Seems to me there should be an open conversation with upstream to establish a recommended deprecation story for everyone, i.e. with the participants/OWNERS around https://github.com/kubernetes/kubernetes/issues/119445.

As far as I can tell, this would be a breaking change for any user that assumes or relies on a finalizer's exact name.

JoelSpeed commented 2 months ago

I'd suggest we start by introducing and using both the old and new versions of the finalizers for a release, before removing the old ones in a subsequent release. It wouldn't be a breaking change to add the new finalizer versions in the current release; we would then recommend that folks upgrade via whatever the next Y-stream release is before they upgrade to the next API version.

This would give a period where both finalizers are present, during which users can migrate any tooling they already have.
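A minimal sketch of that transition, assuming controller-runtime's controllerutil helpers and a placeholder for the new qualified name:

```go
package controllers

import (
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

const (
	oldFinalizer = "cluster.cluster.x-k8s.io" // current value
	newFinalizer = "cluster.x-k8s.io/cluster" // assumed qualified name
)

// ensureFinalizers sets both names during the transition release so
// tooling keyed on the old value keeps working.
func ensureFinalizers(obj client.Object) bool {
	a := controllerutil.AddFinalizer(obj, oldFinalizer)
	b := controllerutil.AddFinalizer(obj, newFinalizer)
	return a || b
}

// removeFinalizers removes both in the same update on delete, so the
// object is never left holding only one of the pair.
func removeFinalizers(obj client.Object) bool {
	a := controllerutil.RemoveFinalizer(obj, oldFinalizer)
	b := controllerutil.RemoveFinalizer(obj, newFinalizer)
	return a || b
}
```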

On down conversion, I guess we could look at restoring the old finalizer, though I haven't personally mutated ObjectMeta during a conversion before, so I can't say how easy or hard that would be.
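The ObjectMeta mutation itself would be small; a hypothetical helper (not existing CAPI conversion code) for mapping the new name back to the old one during down conversion:

```go
package conversion

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// renameFinalizer rewrites one finalizer name in place, e.g. restoring
// the old, unqualified name inside a down-conversion function.
func renameFinalizer(meta *metav1.ObjectMeta, from, to string) {
	for i, f := range meta.Finalizers {
		if f == from {
			meta.Finalizers[i] = to
		}
	}
}
```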

Do we currently have plans for timelines around a new API version?

neolit123 commented 2 months ago

Isn't the presence of both new and old finalizers resulting in more stages where the user is broken, compared to a single stage where the finalizer moves from old to new?

sbueringer commented 2 months ago

How is the user broken if our controllers are adding & removing more finalizers?

To answer my own question: I guess they are broken if they rely on the exact presence (or rather, absence) of finalizers?

I guess relying on this only makes sense if they are trying to implement some sort of ordering (maybe?) around finalizers, which feels a bit like an anti-pattern?
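For concreteness, the fragile pattern in question might look like this in third-party tooling (hypothetical code, not anything in CAPI):

```go
package tooling

import "sigs.k8s.io/controller-runtime/pkg/client"

// onlyMyFinalizerLeft gates cleanup on the exact finalizer set. It
// breaks as soon as another writer renames or adds a finalizer, which
// is why ordering via finalizers is considered an anti-pattern.
func onlyMyFinalizerLeft(obj client.Object, myFinalizer string) bool {
	finalizers := obj.GetFinalizers()
	return len(finalizers) == 1 && finalizers[0] == myFinalizer
}
```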

JoelSpeed commented 2 months ago

> I guess relying on this only makes sense if they are trying to implement some sort of ordering (maybe?) around finalizers, which feels a bit like an anti-pattern?

This has always been explicitly called out as an anti-pattern, and even if someone were doing this, having both finalizers shouldn't cause issues, right? They would be looking for the presence of the old version and blocking their logic on it; additional finalizers shouldn't matter unless they were waiting for there to be precisely their own and the CAPI finalizer, but that is so easily broken that I'd be surprised if anyone was doing it.

> Isn't the presence of both new and old finalizers resulting in more stages where the user is broken, compared to a single stage where the finalizer moves from old to new?

Since both would be added to and removed from each object at the same time, I don't think it would break users; it would just mean there's an additional value that they could rely on.

However, the point about people not relying on others' finalizers is a fair one. Are finalizers considered an implementation detail? Would it really be a breaking change to change the value if each controller has the correct logic to migrate from old to new? 🤔
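If finalizers are treated as an implementation detail, the per-controller migration could be a one-shot rewrite; a sketch, reusing the placeholder names from the earlier snippet:

```go
// migrateFinalizer swaps the old name for the new one on objects the
// controller already reconciles; the caller patches the object when
// true is returned. Sketch only; the finalizer names are assumptions.
func migrateFinalizer(obj client.Object) bool {
	if !controllerutil.ContainsFinalizer(obj, oldFinalizer) {
		return false
	}
	controllerutil.RemoveFinalizer(obj, oldFinalizer)
	controllerutil.AddFinalizer(obj, newFinalizer)
	return true
}
```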

sbueringer commented 2 months ago

Let's just say this is probably the first time we've worried about introducing a new finalizer; we have introduced a lot of them within v1beta1. I'm also a bit hesitant to consider the existence/absence of a finalizer an API.

dlipovetsky commented 2 months ago

> Are finalizers considered an implementation detail? Would it really be a breaking change to change the value if each controller has the correct logic to migrate from old to new? 🤔

I think it's fair to say they are an implementation detail. However, I don't see any downside to the conservative approach you proposed:

> I'd suggest we start by introducing and using both the old and new versions of the finalizers for a release, before removing the old ones in a subsequent release

guettli commented 2 months ago

This is a lot of effort for only a little benefit. Maybe we could update the upstream code that generates the warning message? I don't think the current name (without a slash) is a real problem.

chrischdi commented 2 months ago

The warning comes from the kube-apiserver and is intentionally verbose; see the linked issue:

I don't think the format will get changed on the upstream side.

sbueringer commented 2 months ago

I'm also not looking forward to this work, but we probably don't have another choice.

Related Slack thread: https://kubernetes.slack.com/archives/C0EG7JC6T/p1721923881095799

fabriziopandini commented 1 month ago

Linking a comment about a proposed convention for new finalizers: https://github.com/kubernetes-sigs/cluster-api/pull/10791#discussion_r1732904949

dlipovetsky commented 2 weeks ago

In the meantime, we are hiding "finalizer name" warnings for finalizers in the cluster.x-k8s.io group by default in:

When we fix our finalizers, we can remove this warning handler.
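For illustration, a minimal sketch of such a handler built on client-go's rest.WarningHandler interface (names here are hypothetical, not the merged implementation):

```go
package warnings

import (
	"strings"

	"k8s.io/client-go/rest"
)

// finalizerWarningFilter suppresses the apiserver's finalizer-name
// warning for cluster.x-k8s.io finalizers and forwards everything else.
type finalizerWarningFilter struct {
	next rest.WarningHandler // e.g. controller-runtime's KubeAPIWarningLogger
}

func (f finalizerWarningFilter) HandleWarningHeader(code int, agent, text string) {
	if strings.Contains(text, "prefer a domain-qualified finalizer name") &&
		strings.Contains(text, "cluster.x-k8s.io") {
		return
	}
	f.next.HandleWarningHeader(code, agent, text)
}
```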