/area clusterctl
Some implementation notes for later:
We instantiate the default Proxy implementation in https://github.com/kubernetes-sigs/cluster-api/blob/85d2880ef3a2887bc6313682daba9be151a2474c/cmd/clusterctl/client/cluster/client.go#L203
We have a chance to pass some options at this time.
However, we also instantiate the default Proxy implementation in https://github.com/kubernetes-sigs/cluster-api/blob/99866da16d2cb24cc7fda10884cc21f79ef741b2/cmd/clusterctl/client/cluster/ownergraph.go#L71
And we don't currently accept options here. We don't even use the InjectProxy pattern we established in the client package.
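For later reference, a minimal self-contained sketch of the functional-options/InjectProxy pattern described above; the types here are stand-ins, not the real clusterctl signatures:

```go
package main

import "fmt"

// Proxy stands in for clusterctl's proxy abstraction over the Kubernetes API.
type Proxy interface {
	Name() string
}

type defaultProxy struct{}

func (defaultProxy) Name() string { return "default" }

// Option mutates the client under construction.
type Option func(*client)

// InjectProxy mirrors the option pattern referenced above: it swaps out the
// default Proxy before the client is returned.
func InjectProxy(p Proxy) Option {
	return func(c *client) { c.proxy = p }
}

type client struct {
	proxy Proxy
}

// New applies options on top of the default Proxy, which is the hook the
// ownergraph.go code path currently skips.
func New(options ...Option) *client {
	c := &client{proxy: defaultProxy{}}
	for _, o := range options {
		o(c)
	}
	return c
}

func main() {
	fmt.Println(New().proxy.Name()) // prints "default"
}
```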
I didn't look into the implementation details, but I agree with the goal.
/triage accepted
@dlipovetsky Have you only seen this during clusterctl move so far?
Okay, I also saw it in controller logs. Not sure if I like it :)
@dlipovetsky Do you know if there are any discussions about rolling back this warning?
It will be very noisy everywhere, and almost everyone won't be able to just change the name of the finalizer (i.e. they can't do anything about it, usually for years, I think).
> @dlipovetsky Do you know if there are any discussions about rolling back this warning?
That's a great question. I'll look around and report back.
Started a thread: https://kubernetes.slack.com/archives/C0EG7JC6T/p1721923881095799
@dlipovetsky Will definitely review your PR in controller-runtime tomorrow, so that we get the ability to suppress warnings ASAP
It turned out that controller-runtime, when configured to suppress warnings, was not doing so. That's now fixed, and should be in the next controller-runtime release.
I realize that opinions can reasonably differ on whether to suppress warnings, or even which warnings to suppress. Today, controller-runtime is not that flexible; it can either suppress all warnings or surface all of them.
Yeah not sure what exactly to do, making it configurable in general in the clusterctl library is definitely okay (TBD how exactly)
> or even which warnings to suppress
We can make this configurable in the library with your follow-up PR in CR, right?
> We can make this configurable in the library with your follow-up PR in CR, right?
Yes. I just filed https://github.com/kubernetes-sigs/controller-runtime/pull/2896. With this change, clusterctl itself can define config.WarningHandler, or, for more flexibility, allow the clusterctl user to define it.
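For context, the hook lives on the rest.Config that controller-runtime clients are built from: rest.Config has a WarningHandler field, and client-go ships a stock rest.NoWarnings handler that discards everything. A minimal sketch (the kubeconfig path is a placeholder):

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config; the kubeconfig path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// rest.NoWarnings drops every API server warning; a custom
	// rest.WarningHandler could filter selectively instead.
	cfg.WarningHandler = rest.NoWarnings{}

	// The config can now be passed to a controller-runtime client
	// constructor; with the fix above, the handler is honored.
	_ = cfg
}
```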
@dlipovetsky Thx for opening the CR PRs. Now that they are merged, how do you want to handle this in clusterctl?
I think the final implementation will only be available in CR v0.19.x (which is too late for CAPI v1.8.x). Do you need something for CAPI v1.8.x? (if yes, we could go with providing an option to suppress all warnings with CAPI v1.8 and then something more fine-grained with 1.9)
Third-party code creates a clusterctl client in two ways. The default client implementation includes a "proxy" implementation. The proxy configures the controller-runtime client, which is used exclusively to talk to the Kubernetes API. (A simplified sketch follows the diagram below.)
```mermaid
flowchart TD
    user["third-party code"]
    ccn["`cmd/clusterctl/client/cluster/New()`"]
    gog["`cmd/clusterctl/client/cluster/GetOwnerGraph()`"]
    np["`cmd/clusterctl/client/cluster/newProxy()`"]
    nc["`cmd/clusterctl/client/cluster/proxy.NewClient()`"]
    lib["`/cmd/clusterctl/client/*`"]
    user --> ccn
    user --> gog
    ccn --> np
    gog --> np
    lib --> nc
```
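To make the diagram concrete, here is a simplified stand-in (not real clusterctl code) for the duplication described in the notes above: both entry points build a proxy, but only one has a natural place to thread options through.

```go
// Simplified stand-ins mirroring the flowchart; not real clusterctl code.
package cluster

type proxy struct{} // wraps the controller-runtime client configuration

// ProxyOption is a hypothetical knob newProxy could accept.
type ProxyOption func(*proxy)

func newProxy(opts ...ProxyOption) *proxy {
	p := &proxy{}
	for _, o := range opts {
		o(p)
	}
	return p
}

// New has a natural place to forward caller options to newProxy ...
func New(opts ...ProxyOption) *proxy { return newProxy(opts...) }

// ... while GetOwnerGraph builds its proxy directly and accepts none,
// which is the gap called out above.
func GetOwnerGraph() *proxy { return newProxy() }
```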
We want third-party code to configure the controller-runtime client, specifically how to handle warnings returned by the Kubernetes API.
The controller-runtime client de-duplicates and surfaces warnings by default. To customize handling, users must implement a rest.WarningHandler.
I think a scalable solution is for clusterctl to implement a rest.WarningHandler. The handler should be configurable; for example, users may want to suppress some warnings, like those about finalizers, while surfacing others.
We should give the user the opportunity to configure the warning handler when they call cluster.New() or cluster.GetOwnerGraph().
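A sketch of what such a configurable handler could look like, assuming substring matching is enough to identify the finalizer warnings; filteringHandler is an illustrative name, while rest.WarningHandler, rest.WarningLogger, and the HandleWarningHeader signature are existing client-go API:

```go
package main

import (
	"strings"

	"k8s.io/client-go/rest"
)

// filteringHandler drops warnings matching any suppressed substring and
// forwards everything else to a fallback handler.
type filteringHandler struct {
	suppress []string
	fallback rest.WarningHandler
}

// HandleWarningHeader implements rest.WarningHandler.
func (h filteringHandler) HandleWarningHeader(code int, agent, message string) {
	for _, s := range h.suppress {
		if strings.Contains(message, s) {
			return // suppressed
		}
	}
	h.fallback.HandleWarningHeader(code, agent, message)
}

func main() {
	cfg := &rest.Config{} // populated from a kubeconfig in real use

	// Suppress the finalizer-name warnings, surface everything else.
	cfg.WarningHandler = filteringHandler{
		suppress: []string{"prefer a domain-qualified finalizer name"},
		fallback: rest.WarningLogger{},
	}
	_ = cfg
}
```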
Sounds good to me. So I assume something for CAPI v1.9 is early enough for you? (as the CR version used in v1.8.x doesn't allow this)
Feedback from @fabriziopandini would be also good once he's back from PTO
> Do you need something for CAPI v1.8.x? (if yes, we could go with providing an option to suppress all warnings with CAPI v1.8 and then something more fine-grained with 1.9)
We're suppressing warnings using a very small patch in our downstream fork, and we can carry that patch until the first 1.9 upstream release. However, if there is enough community demand, I could work on a temporary upstream solution for 1.8.x.
+1 to enable an option for 1.9.x (as a feature addition). We can do bugfixes for 1.8.x.
@dlipovetsky If you want, we now should be ready to implement something for v1.9 on main
/help
@sbueringer: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help
command.
/assign
What would you like to be added (User Story)?
As a developer, I want to suppress API server warnings when I use the clusterctl library.
Detailed Description
As a developer, I want to suppress API server warnings when I use the clusterctl library.
More precisely, I want to suppress warnings when I create the clusterctl client here: https://github.com/kubernetes-sigs/cluster-api/blob/85d2880ef3a2887bc6313682daba9be151a2474c/cmd/clusterctl/client/cluster/client.go#L187-L189
So that, whenever clusterctl creates a "proxy" client, that client is configured to suppress API server warnings.
Clusterctl creates Kubernetes clients in at least 3 places, and it probably makes sense to suppress warnings in all of them.
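A hypothetical sketch of the requested knob; none of these names exist in clusterctl today, and the real proxy plumbing would differ:

```go
// Hypothetical sketch; InjectWarningHandler and this proxy shape do not
// exist in clusterctl today.
package cluster

import "k8s.io/client-go/rest"

type proxy struct {
	// nil means keep controller-runtime's default warning logger.
	warningHandler rest.WarningHandler
}

// InjectWarningHandler would be an option accepted by cluster.New (and by
// the other client constructors), stored on the proxy.
func InjectWarningHandler(h rest.WarningHandler) func(*proxy) {
	return func(p *proxy) { p.warningHandler = h }
}

// applyTo stands in for wherever the proxy builds its rest.Config, so that
// every client clusterctl creates picks up the caller's handler.
func (p *proxy) applyTo(cfg *rest.Config) {
	if p.warningHandler != nil {
		cfg.WarningHandler = p.warningHandler
	}
}
```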
Anything else you would like to add?
Warnings can appear due to factors outside of my users' control. For example, as of Kubernetes v1.29.0, users begin seeing new warnings related to finalizer names during every "move" operation. Those warnings will continue to appear until all Cluster API projects use finalizer names that conform to the requirements (see #10914).
clusterctl move output with warnings
```
> ./clusterctl move --kubeconfig=src.kubeconfig --to-kubeconfig=dst.kubeconfig
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=1
Waiting for all resources to be ready to move
Creating objects in the target cluster
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "cluster.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockercluster.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "kubeadm.controlplane.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
Deleting objects from the source cluster
```

As a developer, if I choose to suppress warnings, I take responsibility for acknowledging and addressing them as needed.
We may also consider providing this option in the clusterctl CLI, but I think that's a separate topic.
Label(s) to be applied
/kind feature