doitintl / kube-no-trouble

Easily check your clusters for use of deprecated APIs

Kubent causes AWS EKS upgrade warning #584

Open rgarrigue opened 5 months ago

rgarrigue commented 5 months ago

Hello,

I discovered kubent recently and used it to check some EKS clusters before upgrading from 1.27 => 1.28 => 1.29. Everything went fine for 1.27: nothing deprecated was reported, and the EKS upgrade went fine.

Now I'm dealing with my last cluster, which starts out on 1.28. I ran kubent again, no trouble. But when I go to hit the EKS upgrade button, I get a warning about deprecated API usage, and it turns out it's kubent's own usage.

(screenshots: EKS console warning about deprecated API usage ahead of the upgrade)

arushdesp commented 5 months ago

Hi there, it's the same for me. If you run `kubectl get --raw /apis/flowcontrol.apiserver.k8s.io/`, you will see that the deprecated version is still served by the cluster (it will only be removed later), but the preferred version is already the new one, v1beta3, so it's not really a problem. I don't like the error in the console either, though. I tried other tools as well and got the same error, so apparently the call to that API is made regardless.
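
For reference, the same lookup can be done from Go with client-go's discovery client rather than the raw kubectl call. This is only an illustrative sketch (the kubeconfig location and output formatting are assumptions), not how kubent itself queries the API:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from its default location (assumption: ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// Roughly equivalent to `kubectl get --raw /apis/flowcontrol.apiserver.k8s.io/`:
	// list the served versions of the group and its preferred version.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name != "flowcontrol.apiserver.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			fmt.Println("served:   ", v.GroupVersion)
		}
		fmt.Println("preferred:", g.PreferredVersion.GroupVersion)
	}
}
```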

dark0dave commented 4 months ago

Actually this is more complicated than it seems. Argh. Basically, I can't update the go client without losing the ability to query old resources.

Once I move past version 1.28 to 1.29, I can't get the old resources, hence the issue you are seeing. The k8s Go client we use is old.

This is a bit of a tricky issue to fix, as I'd like to keep some backwards compatibility with older clusters.
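
To make the tension concrete, here is a hypothetical sketch, not kubent's actual code, of the kind of dynamic-client list against a pinned older API version that such a tool ends up issuing. On a cluster that still serves the old version, this request is exactly the sort of call that surfaces in Upgrade Insights; once the version is removed, the call simply fails:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Pinned to an older beta version; this specific GVR is an assumption for
	// illustration, not necessarily what kubent requests internally.
	gvr := schema.GroupVersionResource{
		Group:    "flowcontrol.apiserver.k8s.io",
		Version:  "v1beta2",
		Resource: "flowschemas",
	}

	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("version no longer served (or other error):", err)
		return
	}
	fmt.Printf("found %d FlowSchemas via %s\n", len(list.Items), gvr.GroupVersion())
}
```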

m-franke-tqgg commented 4 months ago

Are you sure that kubent is the culprit behind the deprecation warning?

I have three EKS clusters running v1.28, and only one shows the flowcontrol check in red while the other two stay green, even though I checked all three of them with kubent -t 1.29.

And interestingly, my kubent (latest release installed via Homebrew) does not show any deprecation warnings for the flowcontrol API endpoint on any of my clusters.

I just ran a test, executing the check against my "red" cluster 10 times in a loop; look at the stats:

(screenshot: EKS Upgrade Insights request statistics for the deprecated flowcontrol API)

The number of requests was 15 before the test as well, and look at the last request time: quite some time has passed since April 26th.

Just a thought, but could it be that the EKS Insights tooling itself uses kubent under the hood, and that this causes, or used to cause, these errors?

mattburgess commented 2 months ago

This sounds very similar to #525; in that ticket, GKE's equivalent of EKS' Upgrade Insights shows the same calls to deprecated/removed APIs.

@dark0dave I appreciate the desire to maintain backward compatibility, but I wonder if that's actually a hard blocker here? What I'm thinking is that in this specific case, as a cluster operator, I'd be more than happy to install a version-pinned kubent and keep upgrading it as and when my clusters get upgraded.

So, for example, being on k8s-1.26.x, I'd install kubent-1.26.0 (or any subsequent patch version thereof). That version of kubent would be compiled against the 1.26.x version of the go-client library. Sure, it wouldn't be able to query APIs removed in k8s <= 1.25.x, but as a cluster operator those are already gone from my clusters at that point anyway. Since one is unable to upgrade apiservers by more than one minor release at a time, I don't think keeping kubent updated is an unreasonable demand on operators.

What I don't know is how burdensome this kind of churn would be on you as a kubent maintainer though.
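
For concreteness, the per-minor-release idea could look something like the go.mod sketch below, where each kubent release line tracks the matching client-go line. The module path is the real repository, but the pinned versions are purely illustrative and not what the project actually ships:

```
// Illustrative go.mod for a hypothetical kubent release tracking Kubernetes 1.26;
// the pinned dependency versions are examples only.
module github.com/doitintl/kube-no-trouble

go 1.20

require (
	k8s.io/api v0.26.0
	k8s.io/apimachinery v0.26.0
	k8s.io/client-go v0.26.0
)
```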

github-actions[bot] commented 3 weeks ago

This issue has not seen any activity in the last 60 days and has been marked as stale.