Open fpoirotte opened 10 months ago
This seems to have nothing to do with kube at all. This is just how our regular dns setup works.
By default, play kube pods run on a network (podman-default-kube-network) with DNS enabled, so we use aardvark-dns for it. The upstream resolvers are then used only by aardvark-dns, and the container itself uses the aardvark-dns IP as its resolver. This is required to allow name resolution for container names.
If you want to use your own DNS inside the container, you should use a network without DNS enabled (i.e. the default podman network); in that case we write all servers directly into the container's resolv.conf.
Compare
$ podman run --rm --dns 127.0.0.1 --network podman alpine cat /etc/resolv.conf
search fritz.box
nameserver 127.0.0.1
$ podman run --rm --dns 127.0.0.1 --network podman-default-kube-network alpine cat /etc/resolv.conf
search dns.podman
nameserver 10.89.2.1
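If container-name resolution is not needed but custom --dns servers are, a middle ground is a user-defined network created with DNS disabled (just a sketch; the network name here is arbitrary):

```shell
# Create a network without aardvark-dns; container names will not resolve
# on it, but --dns servers are written straight into the container's resolv.conf.
podman network create --disable-dns mynet
podman run --rm --dns 127.0.0.1 --network mynet alpine cat /etc/resolv.conf
```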
Thanks for the clarification.
Indeed, podman play kube gives the expected result after I add --network podman to the command line.
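For future readers, the workaround looks something like this (a sketch with hypothetical file and pod/container names):

```shell
# Run the pod on the default "podman" network, which has DNS disabled,
# so the dnsConfig servers from the YAML land directly in resolv.conf.
podman play kube --network podman demo.yml
podman exec demo-demo cat /etc/resolv.conf
```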
Maybe this could be added to the FAQ (but this does not really look like a frequently asked question). Otherwise, feel free to close this issue.
I agree that this can/should be documented better. Where would you expect it to be documented?
A friendly reminder that this issue had no activity for 30 days.
I also ran into this and was stumped.
But to me the real question is: why does aardvark-dns not resolve?
Using the internal resolver I am getting:
$ cat /etc/resolv.conf
search dns.podman
nameserver 10.89.1.1
options edns0
$ nslookup google.de 10.89.1.1
;; connection timed out; no servers could be reached
And while disabling DNS is somewhat of a workaround, I guess the real question is: why isn't the resolver reachable?
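In case it helps the next person debugging this, a few diagnostic steps (only a sketch; exact command availability depends on the podman version, and firewalld interference is just one possible cause):

```shell
# Check whether aardvark-dns is running at all
pgrep -a aardvark-dns
# For rootless setups, the DNS listener lives in the rootless network namespace
podman unshare --rootless-netns ss -lunp
# firewalld reloads can wipe podman's firewall rules; reinstalling them may help
sudo podman network reload --all
```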
Issue Description
I'm running a pod using podman play kube where one of the containers is providing DNS resolution for the other containers. Therefore, I want to be able to set the DNS nameservers to 127.0.0.1 in the pod's configuration. However, the setting seems to be ignored (cat /etc/resolv.conf in any of the containers only shows the default nameserver set by podman instead of my own).

If I instead create the pod by hand, everything works as expected. In addition:

- Running podman generate kube on the pod created by podman play kube, I can see that the settings are present.
- Running podman pod inspect demo, the settings are present as well, under the InfraConfig.DNSServer key.
- Running podman container inspect demo-demo, the settings are missing.
- Running podman container inspect $(podman pod inspect --format '{{.InfraContainerID}}' demo), the settings are present. But if I run podman cp $(podman pod inspect --format '{{.InfraContainerID}}' demo):/etc/resolv.conf ./ to copy the infra container's resolv.conf to the host machine, I can see the settings are incorrect (podman's instead of my own).

It seems to me like #9132 only added support for dnsConfig in podman inspect & podman generate kube, and that podman play kube still does not respect the pod's configuration.

Steps to reproduce the issue
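For reference, a minimal manifest exercising this would look roughly like the following (illustrative only; the names and image are placeholders, and dnsPolicy: "None" is how Kubernetes normally makes dnsConfig authoritative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 127.0.0.1   # the in-pod DNS container; ignored by play kube on a DNS-enabled network
  containers:
    - name: demo
      image: docker.io/library/alpine
      command: ["sleep", "infinity"]
```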
/etc/resolv.conf:

Describe the results you received
Describe the results you expected
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
No
Additional environment details
Host: Fedora Linux 38 with podman installed from official repositories
Additional information
n/a