rohanKanojia closed this pull request 3 weeks ago.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: *(no approvals yet)*. Once this PR has been reviewed and has the lgtm label, please assign gbraad for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Hi @rohanKanojia. Thanks for your PR.
I'm waiting for a crc-org member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the `ok-to-test` label.
I understand the commands that are listed here.
/ok-to-test
Could anyone please help me understand CI failures in the Windows-QE pipeline? Could it be a flaky failure? From the GitHub action logs, it seems that an action failed to generate a report. I'm not entirely sure whether these failures are related to changes made in this pull request.
@rohanKanojia I would say they are related to something else, since these two pipelines fail for me too in #4343.
@adrianriobo and @lilyLuLiu can help you with this
The CI failures in the Windows-QE pipeline happened while copying test resources to the target machine; this is QE-related, not caused by this PR. @adrianriobo, we need to improve the failure handling for deliverest.
@lilyLuLiu : Is there any open issue to track this?
@rohanKanojia https://github.com/adrianriobo/deliverest/issues/50
@rohanKanojia: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Test name | Commit | Details | Required | Rerun command |
---|---|---|---|---|
ci/prow/e2e-crc | cdc863f2290d6d11ca57ea9711d2376c0465f1cd | link | true | /test e2e-crc |
Full PR test history. Your PR dashboard.
Description
Fix #1569
At the moment, we only clean up the crc context from the kubeconfig during `crc delete`. This can be problematic if the user tries to run any cluster-related command after running `crc stop`, as the kubeconfig still points to a CRC cluster that is no longer active.

I checked minikube's behavior and noticed that it cleans up the kubeconfig on both the stop and delete commands. Make crc behavior consistent with minikube and perform the kubeconfig cleanup in both subcommands.
Signed-off-by: Rohan Kumar rohaan@redhat.com
Type of change
Checklist
Fixes: Issue #1569
Relates to: Issue #1569
Solution/Idea

Clean up the `.kube/config` file while doing `crc stop` in order to not leave the kubeconfig in an inconsistent state.

Currently, after `crc stop`, the `.kube/config` file is left pointing to an outdated kube-context. This results in timeouts on the client side when the user tries to access the cluster with any kube client (`oc`/`kubectl`).

This pull request cleans up `.kube/config` to align crc behavior with minikube, so that the client fails fast instead. (Screenshot: trying to access the cluster after `crc stop`.)

Proposed changes
Add a call to `cleanKubeconfig` in `stop.go` to clean up the kubeconfig while stopping the cluster.

Testing
In order to test this branch you need to follow these steps:
1. `make cross` to build the `crc` binary.
2. `./out/linux-amd64/crc setup`
3. `./out/linux-amd64/crc start`
4. `./out/linux-amd64/crc stop`
5. Verify that `.kube/config` is cleaned up after `crc stop`.
6. Try to access the cluster with `kubectl`/`oc`; it now fails fast instead of timing out.
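For reviewers unfamiliar with what the cleanup does, the idea can be sketched as a small Go function. This is an illustrative sketch only: the struct and function names below are hypothetical stand-ins (the real crc code operates on the actual kubeconfig file, not these simplified maps), but the logic is the same: remove the clusters that point at the CRC API server, drop the contexts referencing them, and reset `current-context` if it pointed at a removed context.

```go
package main

import (
	"fmt"
	"strings"
)

// kubeConfig is a simplified stand-in for the kubeconfig structure;
// the real crc code works with the on-disk ~/.kube/config file.
type kubeConfig struct {
	CurrentContext string
	Clusters       map[string]string // cluster name -> server URL
	Contexts       map[string]string // context name -> cluster name
}

// cleanKubeConfig (hypothetical name, mirroring the helper mentioned in
// this PR) removes clusters whose server URL matches the CRC API host,
// any contexts that reference them, and resets current-context if it
// pointed at a removed context.
func cleanKubeConfig(cfg *kubeConfig, crcAPIHost string) {
	removed := map[string]bool{}
	for name, server := range cfg.Clusters {
		if strings.Contains(server, crcAPIHost) {
			delete(cfg.Clusters, name)
			removed[name] = true
		}
	}
	for name, cluster := range cfg.Contexts {
		if removed[cluster] {
			delete(cfg.Contexts, name)
			if cfg.CurrentContext == name {
				cfg.CurrentContext = ""
			}
		}
	}
}

func main() {
	cfg := &kubeConfig{
		CurrentContext: "crc-admin",
		Clusters: map[string]string{
			"api-crc-testing": "https://api.crc.testing:6443",
			"prod":            "https://prod.example.com:6443",
		},
		Contexts: map[string]string{
			"crc-admin": "api-crc-testing",
			"prod":      "prod",
		},
	}
	cleanKubeConfig(cfg, "api.crc.testing")
	// Only the non-CRC entries survive, and current-context is cleared.
	fmt.Println(cfg.CurrentContext == "")
	fmt.Println(len(cfg.Clusters), len(cfg.Contexts))
}
```

With the stale entries gone, a subsequent `kubectl`/`oc` call has no dangling context to dial, which is why it fails fast rather than timing out against the stopped cluster.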