haodeon opened this issue 1 year ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Haven’t found an answer yet but issue #17107 appears to be a duplicate of this one.
I have tested @juparog's workaround of adding another nameserver, such as 8.8.8.8, to /etc/resolv.conf. After minikube stop + start, CoreDNS is able to resolve lookups.
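Concretely, the workaround amounts to appending one line to /etc/resolv.conf (this assumes the file is not being rewritten by systemd-resolved or NetworkManager on your distribution):

```
# appended to /etc/resolv.conf — public resolver as a fallback
nameserver 8.8.8.8
```

After saving the change, run minikube stop followed by minikube start so CoreDNS picks up the new upstream resolver.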
/remove-lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
It looks like this may affect rootless Docker as well. See https://github.com/kubernetes/minikube/issues/18667#issuecomment-2121495029
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
What Happened?
I am doing some testing with kind and minikube. I have run into a k8s networking issue which I am hoping someone with more experience can help explain.
I am running rootless Podman on Pop!_OS, which is based on Ubuntu. When I create a cluster using minikube, pod networking does not work. I initially thought this was a DNS issue, but the CoreDNS pod logs show that it cannot reach 192.168.49.1:53 (i/o timeout)
. After further debugging, I discovered that enabling net.ipv4.ip_forward and recreating the cluster makes everything work. What is confusing to me is that when I use kind to create a cluster, it works just fine without net.ipv4.ip_forward being enabled. From what I can tell, minikube and kind both use kindnet by default. I also tried the bridge CNI in minikube, and it wouldn't work regardless of whether IP forwarding was enabled or disabled.
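For reference, a minimal way to inspect the kernel knob in question; the procfs path is the standard Linux location, and changing the value requires root (the sysctl commands are shown as comments rather than run):

```shell
# Read the current IPv4 forwarding setting: 0 = disabled, 1 = enabled.
cat /proc/sys/net/ipv4/ip_forward

# To enable it (run as root):
#   sysctl -w net.ipv4.ip_forward=1
# To persist the setting across reboots, drop it into sysctl.d:
#   echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ipforward.conf
```

With the value at 1, recreating the minikube cluster (minikube delete && minikube start) was what made pod networking come up in my testing.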
Is it intended that net.ipv4.ip_forward must be enabled for minikube to work with rootless Podman? How is kind able to work with net.ipv4.ip_forward disabled?

Attach the log file
log.txt
Operating System
Ubuntu
Driver
Podman