Open xrstf opened 1 month ago
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: xrstf Once this PR has been reviewed and has the lgtm label, please assign krzyzacy for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
| Name | Link |
|---|---|
| Latest commit | 68c7dc43d3141ec8b598e5b81cc4a37364f6d92f |
| Latest deploy log | https://app.netlify.com/sites/k8s-prow/deploys/6730d84b21ecf30007595a84 |
| Deploy Preview | https://deploy-preview-293--k8s-prow.netlify.app |
/test all
/cc
PR needs rebase.
This PR brings Prow up to speed with the latest Kubernetes and controller-runtime dependencies, plus a few more changes needed to make these new dependencies work.
controller-tools 0.16.4
Without this update, codegen would fail.
golangci-lint 1.58.0
After updating code-generator, staticcheck suddenly threw false positives. However, looking at the code, the `help == nil` check leads to a `t.Fatal`, which should be recognized by staticcheck. I have no idea why this suddenly happened, but updating to the next highest golangci-lint version fixes the issue.

Flakiness due to rate limiting
I noticed some tests flaking a lot and started digging. It turns out the issue wasn't actually from loops timing out or contexts getting cancelled, but from the client-side rate limiting that is enabled in the kube clients. I think that during integration tests it doesn't make much sense to have rate limiting, as it would mean a lot of code potentially has to handle errors arising from it.
I have therefore disabled the rate limiter by setting `cfg.RateLimiter = flowcontrol.NewFakeAlwaysRateLimiter()` in the integration test utility code.

Deck re-run tests
These tests have been reworked significantly, as they were quite flaky. The issue ultimately boiled down to the old code sorting ProwJobs by ResourceVersion, but during testing I found that ProwJobs are quite often created/updated nearly simultaneously. This has been resolved by sorting the ProwJobs by CreationTimestamp instead, which is unaffected by update calls.
However, that is nearly the smallest change in the refactoring. Another change involves `wait.PollUntilContextTimeout`: it's IMO unnecessary to have a back-off mechanism in integration tests like this, as it just needlessly slows down the tests.

The "rotate Deployment instead of deleting Pods manually" method has been applied to all other integration tests.