Open levenleven opened 8 months ago
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: levenleven. Once this PR has been reviewed and has the lgtm label, please assign mortent for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Welcome @levenleven!
It looks like this is your first PR to kubernetes-sigs/cli-utils 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/cli-utils has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @levenleven. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
FWIW, we mostly avoid this problem by disabling client-side throttling and depending on server-side throttling to handle slowing down requests. If you do the same, you shouldn't ever see any `rate: Wait(n=%d) would exceed context deadline` errors.
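For reference, a hedged sketch of what disabling client-side throttling can look like with client-go (a configuration fragment; the helper name is illustrative, not from this PR):

```go
import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// disableClientSideThrottling turns off client-go's client-side rate
// limiter so only the API server's flow control throttles requests.
func disableClientSideThrottling(cfg *rest.Config) {
	// Per the rest.Config docs, a negative QPS disables client-side
	// rate limiting (unless RateLimiter is set explicitly).
	cfg.QPS = -1
	// Equivalently, inject a no-op limiter:
	cfg.RateLimiter = flowcontrol.NewFakeAlwaysRateLimiter()
}
```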
Unfortunately, that error doesn't have a type, which makes it fragile to catch. And unwrapping only once is also fragile. You would need to either fully unwrap recursively or check for a partial match in the full error message to catch all the edge cases.
That said, what is the behavior you're looking for? Won't it still be status unknown if the context is cancelled?
/ok-to-test
@karlkfi Thanks for looking into this!
> That said, what is the behavior you're looking for? Won't it still be status unknown if the context is cancelled?

No, the poller would keep the previous known status. With the current behavior the error is swallowed and the status is overridden with `Unknown`.
> You would need to either fully unwrap recursively or check for a partial match in the full error message to catch all the edge cases.
If this has a chance to be accepted I can do that 🙂
We often bump into a race condition where resource status won't be read because the context is reaching its deadline. This results in an `Unknown` status being returned. This PR treats this edge case the same way as `context.Canceled` and `context.DeadlineExceeded` are treated.