containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Broken support for dnsConfig in play kube? #20562

Open fpoirotte opened 10 months ago

fpoirotte commented 10 months ago

Issue Description

I'm running a pod using podman play kube where one of the containers provides DNS resolution for the other containers. Therefore, I want to set the DNS nameserver to 127.0.0.1 in the pod's configuration. However, the setting seems to be ignored: cat /etc/resolv.conf in any of the containers shows only the default nameserver set by podman instead of my own.

If I instead create the pod by hand (see the sketch below), everything works as expected.

In addition, it seems to me that #9132 only added support for dnsConfig to podman inspect and podman generate kube, and that podman play kube still does not respect the pod's configuration.
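
For reference, a minimal sketch of what "creating the pod by hand" means here (pod name and image chosen to match the manifest below; the nameserver is passed with podman pod create --dns):

$ podman pod create --name demo --dns 127.0.0.1
$ podman run --rm --pod demo docker.io/library/alpine:3.18 cat /etc/resolv.conf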

Steps to reproduce the issue

  1. Create the following pod manifest:
$ cat demo.yml 
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  restartPolicy: Never

  dnsConfig:
    nameservers:
    - "127.0.0.1"

  containers:
  - name: demo
    image: docker.io/library/alpine:3.18
    command: ["/bin/sleep", "30"]
  2. Play it and display the content of /etc/resolv.conf:
    podman play kube demo.yml > /dev/null && \
    podman exec -it demo-demo cat /etc/resolv.conf && \
    podman pod rm -t 0 -f --latest > /dev/null

Describe the results you received

search dns.podman
nameserver 10.89.0.1

Describe the results you expected

nameserver 127.0.0.1

podman info output

host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 46.31
    systemPercent: 12.34
    userPercent: 41.35
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "38"
  eventLogger: file
  hostname: retracted
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1234
      size: 1
    - container_id: 1
      host_id: 7000000
      size: 8665536
    uidmap:
    - container_id: 0
      host_id: 1234
      size: 1
    - container_id: 1
      host_id: 7000000
      size: 8665536
  kernel: 6.5.7-200.fc38.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 1075367936
  memTotal: 16605478912
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.11-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.11
      commit: 11f8d3dc9fc4bb8a0adcff5ba8bd340f24612701
      rundir: /run/user/1234/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20231004.gf851084-1.fc38.x86_64
    version: |
      pasta 0^20231004.gf851084-1.fc38.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1234/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 10342772736
  swapTotal: 17179860992
  uptime: 291h 57m 19.00s (Approximately 12.12 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  docker.io:
    Blocked: false
    Insecure: false
    Location: registry-1.docker.io
    MirrorByDigestOnly: false
    Mirrors:
    - Insecure: false
      Location: localhost/proxy.docker.io
      PullFromMirror: ""
    Prefix: docker.io
    PullFromMirror: ""
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/retracted/.config/containers/storage.conf
  containerStore:
    number: 28
    paused: 0
    running: 3
    stopped: 25
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/retracted/.local/share/containers/storage
  graphRootAllocated: 229198450688
  graphRootUsed: 209565687808
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 324
  runRoot: /run/user/1234/containers
  transientStore: false
  volumePath: /home/retracted/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.0
  Built: 1695839078
  BuiltTime: Wed Sep 27 20:24:38 2023
  GitCommit: ""
  GoVersion: go1.20.8
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.0

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

Host: Fedora Linux 38 with podman installed from official repositories

Additional information

n/a

Luap99 commented 10 months ago

This seems to have nothing to do with kube at all; it is just how our regular DNS setup works. By default, play kube pods run on a network (podman-default-kube-network) with DNS enabled, so we use aardvark-dns for it: the upstream resolvers are only used by aardvark-dns, and the containers themselves use the aardvark-dns IP as their resolver. This is required to allow name resolution for container names.

If you want to use your own DNS server inside the container, you should use a network without DNS enabled (i.e. the default podman network); in that case we write all servers directly into the container's resolv.conf (a short sketch follows the comparison below).

Compare

$ podman run --rm --dns 127.0.0.1 --network podman  alpine cat /etc/resolv.conf 
search fritz.box
nameserver 127.0.0.1
$ podman run --rm --dns 127.0.0.1 --network podman-default-kube-network  alpine cat /etc/resolv.conf 
search dns.podman
nameserver 10.89.2.1
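
If you prefer a dedicated network over the default podman one, a minimal sketch (the network name nodns is just an example; --disable-dns is an option of podman network create):

$ podman network create --disable-dns nodns
$ podman play kube --network nodns demo.yml
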
fpoirotte commented 10 months ago

Thanks for the clarification. Indeed, podman play kube gives the expected result after I add --network podman to the command line.
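
That is, reusing the reproduction steps from above with --network podman added:

podman play kube --network podman demo.yml > /dev/null && \
podman exec -it demo-demo cat /etc/resolv.conf && \
podman pod rm -t 0 -f --latest > /dev/null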

Maybe this could be added to the FAQ (but this does not really look like a frequently asked question). Otherwise, feel free to close this issue.

Luap99 commented 10 months ago

I agree that can/should be documented better. Where do you expect this to be documented?

github-actions[bot] commented 9 months ago

A friendly reminder that this issue had no activity for 30 days.

tcurdt commented 8 months ago

I also ran into this and was stumped.

But to me the real question is: Why does aardvark-dns not resolve? Using the internal resolver I am getting:

$ cat /etc/resolv.conf 
search dns.podman
nameserver 10.89.1.1
options edns0
$ nslookup google.de 10.89.1.1
;; connection timed out; no servers could be reached

And while disabling DNS might be somewhat of a workaround, I guess the real question is: why isn't the resolver reachable?
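
A couple of basic checks that might help narrow this down (nothing is assumed here beyond the network name used earlier in this thread): verify that the aardvark-dns process is actually running on the host, and retry the lookup from inside a container attached to the same network.

$ pgrep -fa aardvark-dns
$ podman run --rm --network podman-default-kube-network alpine nslookup google.de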