knabben closed this issue 7 months ago
Running on WSL2, it seems the CI is passing and does not have this issue. Will close it after confirmation.
@knabben may be related to this https://github.com/kubernetes-sigs/kpng/blob/340d8d0d4f2b16076cef1927d749ef63486624ad/hack/test_e2e.sh#L289-L316
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What kind of issue is this?
/kind bug
Expected behaviour
To have DNS working when using hack/test_e2e.sh to create a cluster
Actual behaviour
CoreDNS pods go down after the first DNS call
Steps to reproduce the problem
Run hack/test_e2e.sh and make a first DNS call; observe pods trying to resolve DNS while the CoreDNS pods go into a NotReady state.
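A minimal way to exercise the failure could look like the commands below. This is a hedged sketch, not taken from the issue: it assumes a cluster created by hack/test_e2e.sh with kubectl pointing at it, and uses the standard upstream e2e dnsutils image and the conventional k8s-app=kube-dns label for CoreDNS.

```shell
# Repro sketch — requires a running cluster created by hack/test_e2e.sh.
# 1. Issue a single DNS query from a throwaway pod (standard e2e dnsutils image).
kubectl run dns-test --rm -it --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  -- nslookup kubernetes.default

# 2. Check the CoreDNS pods; per this report they flip to NotReady after the query.
kubectl -n kube-system get pods -l k8s-app=kube-dns

# 3. Inspect CoreDNS logs for the restart cause.
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
```

The logs from step 3 would show whether CoreDNS is crashing (e.g. a loop detected by its readiness checks) or merely failing upstream resolution, which is the distinction the linked test_e2e.sh lines appear to work around.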