kubernetes-sigs / cli-utils

This repo contains binaries that are built from libraries in cli-runtime.
Apache License 2.0

update dependencies to k8s.io v0.27.2 #619

Closed SimonTheLeg closed 1 year ago

SimonTheLeg commented 1 year ago

This PR updates the dependencies to match the v0.27.2 version of the k8s.io components. go mod also automatically updates kyaml to v0.14.1 to keep compatibility.

With v0.27, DryRunVerifier has been deprecated (https://github.com/kubernetes/kubernetes/pull/114294) without any replacement, so this PR also removes all references to it.

We also need to update controller-runtime to v0.15.0 to work with v0.27.2. This brings another breaking change, to NewDynamicRESTMapper, which this PR adopts as well.
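
For reference, the bumps described above correspond roughly to the following require entries in go.mod (module list abridged and illustrative; the exact set is in the PR diff):

```
require (
	k8s.io/api v0.27.2
	k8s.io/apimachinery v0.27.2
	k8s.io/cli-runtime v0.27.2
	k8s.io/client-go v0.27.2
	sigs.k8s.io/controller-runtime v0.15.0
	sigs.k8s.io/kustomize/kyaml v0.14.1
)
```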

k8s-ci-robot commented 1 year ago

Welcome @SimonTheLeg!

It looks like this is your first PR to kubernetes-sigs/cli-utils 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cli-utils has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. :smiley:

k8s-ci-robot commented 1 year ago

Hi @SimonTheLeg. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
ash2k commented 1 year ago

/ok-to-test

SimonTheLeg commented 1 year ago

Failures are due to controller-runtime not yet supporting v0.27.1, which will be resolved once this is merged: https://github.com/kubernetes-sigs/controller-runtime/pull/2189

Side note from me: the controller-runtime PR depends on a new apimachinery release, which has to include this commit: https://github.com/kubernetes/apimachinery/commit/c2d59b04c24bc18402faa19abe31d4a6675a982a

So I guess the only option for now is to wait.

ash2k commented 1 year ago

This should be unblocked now.

SimonTheLeg commented 1 year ago

Yes! I have updated the PR. One thing: controller-runtime v0.15.0 introduces a breaking change in NewDynamicRESTMapper, which now requires an http.Client to be passed in. From reading the associated PR https://github.com/kubernetes-sigs/controller-runtime/pull/2122, I understand this is mainly intended to allow re-using clients. Since we don't really need that, I just went with http.DefaultClient for now. @ash2k I would appreciate it if you could take a look and check whether I understood this correctly. Thanks! :)
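
For illustration, a minimal sketch of that interim approach, assuming the two-argument form of apiutil.NewDynamicRESTMapper in controller-runtime v0.15 (the wrapper function is hypothetical; the actual call site is in the PR diff):

```go
import (
	"net/http"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
)

// newRESTMapper wraps the v0.15 constructor, which now takes an *http.Client
// in addition to the *rest.Config. http.DefaultClient is the simple choice
// when client re-use is not needed (revised later in this thread).
func newRESTMapper(cfg *rest.Config) (meta.RESTMapper, error) {
	return apiutil.NewDynamicRESTMapper(cfg, http.DefaultClient)
}
```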

ash2k commented 1 year ago

@SimonTheLeg Thanks for updating the PR! I think controller-runtime is often a source of issues. There is no reason not to use vanilla client-go/etc. I think we should drop this dependency for everybody's benefit.

I think the default client is fine because it's only used in tests.

/lgtm

Not sure I can lgtm.

ash2k commented 1 year ago

/retest

SimonTheLeg commented 1 year ago

CI complains about a deprecation notice on the NewExponentialBackoffManager() func. I have tried to apply what is suggested in the deprecation notice, but it leads to compiler errors. For now, I have opened an issue: https://github.com/kubernetes/kubernetes/issues/118638

ash2k commented 1 year ago

@SimonTheLeg Is it possible to silence the linter and use the deprecated API for now until the upstream issue is fixed? I hope we don't need to wait for it to be resolved.

SimonTheLeg commented 1 year ago

/retest

SimonTheLeg commented 1 year ago

I have added the nolint, and cli-utils-presubmit-master is now working. The other two CI jobs, though, seem to fail during or shortly after cluster creation. From a first look, I don't think this is related to any change in this PR. Could someone have a look?
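
For context, the workaround amounts to a lint suppression on the deprecated constructor, roughly like the sketch below (parameter values are illustrative, not the ones used in this PR):

```go
import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/utils/clock"
)

// newBackoffManager is a hypothetical wrapper: the replacement suggested by the
// deprecation notice does not compile in this context (see
// kubernetes/kubernetes#118638), so the deprecated constructor is kept and the
// linter warning is suppressed for now.
func newBackoffManager() wait.BackoffManager {
	return wait.NewExponentialBackoffManager( //nolint:staticcheck
		2*time.Second,     // initial backoff (illustrative value)
		time.Minute,       // max backoff (illustrative value)
		2*time.Minute,     // reset duration (illustrative value)
		2.0,               // backoff factor (illustrative value)
		1.0,               // jitter (illustrative value)
		clock.RealClock{},
	)
}
```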

mortent commented 1 year ago

@SimonTheLeg I suspect this might be related to https://github.com/kubernetes/test-infra/pull/29742, but I will try to take a look.

mortent commented 1 year ago

Actually, looks like this is something that has been going on for a while: https://github.com/kubernetes-sigs/cli-utils/issues/620

mortent commented 1 year ago

I looked into this and I think the failures here are probably related to some of the changes in the PR. I have a WIP PR to debug the tests and it doesn't have the same issue. It is not obvious to me what the cause might be though. Are you able to run the e2e tests (make test-e2e) locally on your machine?

SimonTheLeg commented 1 year ago

I think I found the problem. Before, I just passed http.DefaultClient to NewDynamicRESTMapper. But this does not really work, because the default client does not accept the self-signed certificates from the kind cluster. With HTTPClientFor, this should now work.
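
Concretely, the fix looks roughly like this (a sketch, assuming client-go's rest.HTTPClientFor, which derives the HTTP client from the same rest.Config so the kind cluster's CA and auth settings are honored; the helper name is hypothetical):

```go
import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
)

// newRESTMapper builds the mapper with an HTTP client derived from the rest.Config,
// so TLS settings (including the kind cluster's self-signed CA) are respected.
func newRESTMapper(cfg *rest.Config) (meta.RESTMapper, error) {
	httpClient, err := rest.HTTPClientFor(cfg)
	if err != nil {
		return nil, err
	}
	return apiutil.NewDynamicRESTMapper(cfg, httpClient)
}
```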

SimonTheLeg commented 1 year ago

/retest

ash2k commented 1 year ago

/retest

mortent commented 1 year ago

I think there is a problem with the stress tests that is not related to the changes here. It looks like the prow pod just gets killed, so my guess is that it is using more resources than are allowed.

ash2k commented 1 year ago

/retest

ash2k commented 1 year ago

@mortent I think something else might be going on. Here are the final lines from the logs of the latest run:

  STEP: Verify inventory deleted @ 07/18/23 06:01:51.823
  STEP: Verify 1000 CronTabs deleted @ 07/18/23 06:01:51.825
  STEP: Verify 1000 ConfigMaps deleted @ 07/18/23 06:01:51.826
  STEP: Verify 1000 Namespaces deleted @ 07/18/23 06:01:51.828
  STEP: Verify CRD deleted @ 07/18/23 06:01:51.829
  STEP: Cleanup ConfigMaps @ 07/18/23 06:01:51.832
  STEP: Cleanup Namespaces @ 07/18/23 06:01:51.833
• [175.712 seconds]
------------------------------
[AfterSuite] 
/home/prow/go/src/sigs.k8s.io/cli-utils/test/stress/stress_test.go:75
[AfterSuite] PASSED [0.003 seconds]
------------------------------
Ran 3 of 3 Specs in 1378.344 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS
You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.9.5
Ginkgo ran 1 suite in 23m53.528413139s
Test Suite Passed
+ EXIT_VALUE=0
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Error response from daemon: Could not kill running container 753db20ac63aada47a06c08bb560c748ea2731e1bb9bced39ef24bf419db351c, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container 87203d58796e4141d56ec8031e1e074439b96029a30b3f307f0512aa5dd739a5, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container d26abce62bc8b8da5c4a97104faeaaed98cc05b3501b47f4949858fe8be460d6, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container 5b32a2aaa7b874ef9fdb3b06ab26d8f7555537a46618b9a68cee5d2f68a8991a, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container 39d9f5c32cd93f004f1e14c5180ffb382d565233b7a6a5d5a5dd49d5bdd45a68, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container 08442bb9e92de8cebce642578c5033f75ad6b9e4d8e50da663e0bb6b4184b0d0, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container b978bb1ea3f972975491ca5d878754203d319f358c7f3d229263177687360ce1, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container 2da012b6c0b20b7b2b4317efc484a8d09d2c9a2acb146024a96cc26b629fce12, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container b690f3bd9889787cc4ac09c9f5095f6f15fc862315198a386671d2c1b60e8eb3, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container 2289e514f0e68497b027ceee89137ecdd27fc22a111ea3ce0c080ebbb6e49fe3, cannot remove - tried to kill container, but did not receive an exit event
Error response from daemon: Could not kill running container ba9918f48b4412e1ec8e30c911fd2a9f0f37b1a9894bb4cb7de5e16fe2052dff, cannot remove - tried to kill container, but did not receive an exit event
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker. 

The job page actually says it passed:

Test started today at 3:36 PM passed after 30m7s.

Job execution failed: Pod got deleted unexpectedly

I wonder if this is related to the migration to a different cluster in https://github.com/kubernetes/test-infra/pull/29742.

ash2k commented 1 year ago

If we run this job for master, will it succeed?

ash2k commented 1 year ago

I see it's failing in https://github.com/kubernetes-sigs/cli-utils/pull/621 and everywhere on the dashboard https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/cli-utils-presubmit-master-stress.

ash2k commented 1 year ago

/retest

ash2k commented 1 year ago

@mortent @karlkfi Could you merge this please? Any blockers?

Without going into too much detail: there is a CVE in docker, which is a transitive dependency in my project. I cannot update it because doing so also requires the Kubernetes 1.27 libraries, which I cannot use because cli-utils wouldn't compile against them (see the diff in this PR for the breaking changes). I could pin to the branch SHA, but it would be nicer to just get this merged.

seans3 commented 1 year ago

Looks good to me. I will approve shortly, unless someone has an objection.

On another note: DryRunVerifier is no longer needed.

Since the API server has supported server-side dry run since v1.13, we can assume all current servers will correctly handle the dry-run parameter.
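
In other words, callers can now set the dry-run option directly on write requests instead of probing for support first. A minimal sketch using the typed client-go client (function and object names are illustrative, not from this PR):

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// serverSideDryRun is a hypothetical example: rather than verifying dry-run
// support up front (what DryRunVerifier used to do), the request simply sets
// DryRunAll and lets the API server validate it without persisting anything.
func serverSideDryRun(ctx context.Context, cs kubernetes.Interface, cm *corev1.ConfigMap) error {
	_, err := cs.CoreV1().ConfigMaps(cm.Namespace).Update(ctx, cm, metav1.UpdateOptions{
		DryRun: []string{metav1.DryRunAll},
	})
	return err
}
```
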
mortent commented 1 year ago

/approve

k8s-ci-robot commented 1 year ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ash2k, mortent, SimonTheLeg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/kubernetes-sigs/cli-utils/blob/master/OWNERS)~~ [mortent]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.

seans3 commented 1 year ago

/retest

ash2k commented 1 year ago

/retest

ash2k commented 1 year ago

/retest

ash2k commented 1 year ago

"/home/prow/go/bin/golangci-lint" run ./... make: *** [Makefile:86: lint] Killed

Hm....

ash2k commented 1 year ago

/retest

k8s-ci-robot commented 1 year ago

@SimonTheLeg: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| cli-utils-presubmit-master | 852a8803968cbe50967e40d6dddafd9c32291515 | link | true | /test cli-utils-presubmit-master |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository. I understand the commands that are listed [here](https://go.k8s.io/bot-commands).
k8s-ci-robot commented 1 year ago

PR needs rebase.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
ash2k commented 1 year ago

@SimonTheLeg I've incorporated these changes into https://github.com/kubernetes-sigs/cli-utils/pull/629, which got merged. Thank you!

/close

k8s-ci-robot commented 1 year ago

@ash2k: Closed this PR.

In response to [this](https://github.com/kubernetes-sigs/cli-utils/pull/619#issuecomment-1672328708):

> @SimonTheLeg I've incorporated these changes into https://github.com/kubernetes-sigs/cli-utils/pull/629, which got merged. Thank you!
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.