containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

When a container hosting a DNS Service is running on a user created network, container name resolution fails #23128

Open Zivodor opened 3 days ago

Zivodor commented 3 days ago

Issue Description

Attempting to run dnsmasq or Technitium in a container on a user-created network with DNS enabled causes container name resolution to fail while that container is running.

I have not been able to find documentation about this behavior, nor a workaround that lets my own DNS service run while still allowing automatic resolution of container names to their IP addresses.

Steps to reproduce the issue

  1. Create 3 containers: 1 hosting dnsmasq, 2 hosting any other service.
  2. Create a user-defined network with DNS enabled.
  3. Start the 2 other containers and attach them to the network.
  4. Run the command nslookup container2 in container1; the resolution succeeds.
  5. Start the dnsmasq container and attach it to the network.
  6. Run the command nslookup container2 in container1.
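The steps above can be sketched as shell commands. Network and container names are placeholders, and the dnsmasq image reference is illustrative, not a specific recommendation:

```shell
# 1-2. Create a user-defined network (DNS is enabled by default with netavark)
podman network create testnet

# 3. Start two ordinary containers attached to the network
podman run -d --name container1 --network testnet docker.io/library/alpine sleep infinity
podman run -d --name container2 --network testnet docker.io/library/alpine sleep infinity

# 4. Name resolution works at this point
podman exec container1 nslookup container2

# 5. Start a container running a DNS service on the same network
#    (replace the image with your dnsmasq or Technitium image)
podman run -d --name dns1 --network testnet your-dnsmasq-image

# 6. Resolution now fails according to this report
podman exec container1 nslookup container2
```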

Describe the results you received

The DNS resolution fails and the IP address is not resolved.

Describe the results you expected

The DNS resolution succeeds and the IP address is resolved.

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 99.76
    systemPercent: 0.06
    userPercent: 0.18
  cpus: 8
  databaseBackend: sqlite
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 2015
  hostname: project-hydra
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.1.0-21-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 15922044928
  memTotal: 16628264960
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-3_amd64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-3_amd64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+deb12u1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20230309.7c7625d-1_amd64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 1h 13m 15.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - ghcr.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 1
    stopped: 7
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/podman/.local/share/containers/storage
  graphRootAllocated: 196682272768
  graphRootUsed: 9006194688
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 38
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4
  Built: 0
  BuiltTime: Wed Dec 31 17:00:00 1969
  GitCommit: ""
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

I am running this on a Debian 12.5 headless server.

Additional information

I want to use a custom DNS server so I can resolve a local domain instead of having to use the server's local IP address. I don't understand why the built-in DNS resolution is disabled when the dnsmasq container is active, and I have not been able to find documentation on running the built-in DNS resolution alongside a custom DNS server. Ideally I do not want to assign static IPs to these containers and configure dnsmasq to know about them, as that would make adding new services annoying and seems unnecessary.
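For reference, one possible arrangement (a sketch, not verified here) is to have dnsmasq answer only for the local domain and forward everything else, including container names, to the aardvark-dns resolver that Podman writes into the containers' resolv.conf. The domain name and both addresses below are assumptions for illustration:

```
# /etc/dnsmasq.conf (sketch; names and addresses are assumptions)

# Answer authoritatively for the local domain
address=/home.example/192.168.1.10

# Forward everything else, including container name lookups,
# to the aardvark-dns resolver on the network's gateway address
server=10.89.2.1
```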

Luap99 commented 2 days ago
 version: aardvark-dns 1.4.0

That is outdated. We only support the latest versions upstream.

In general I don't understand how you configured the containers; please provide the exact commands. Setting up a DNS server inside a container should in no way affect the aardvark-dns instance running on the host. Are you sure you are not bypassing the aardvark-dns resolver IP set in resolv.conf in the container?
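One way to check this (a sketch; container names are placeholders, and 10.89.2.1 is an assumed aardvark-dns resolver address that should be replaced with the nameserver actually listed in the container's resolv.conf):

```shell
# Show which resolver the container is actually configured to use
podman exec container1 cat /etc/resolv.conf

# Query that resolver explicitly, so no other resolution path is involved
podman exec container1 nslookup container2 10.89.2.1
```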

Zivodor commented 2 days ago

I am using podman-compose to run everything, but will be switching to Quadlets when this is resolved. I ran these commands myself and it made no difference.

podman network create wgnet --subnet=10.89.2.0/24

podman pod create --name=pod_nextcloud --infra=false

podman run --name=nextcloud_db -d --pod=pod_nextcloud --label PODMAN_SYSTEMD_UNIT=podman-compose@nextcloud.service -v /home/podman/appdata/nextcloud/mysql:/var/lib/mysql:z --network=wgnet --network-alias=db --restart always --uidmap 0:11000:1000 docker.io/mariadb:latest --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW

podman run --name=nextcloud -d --pod=pod_nextcloud --label PODMAN_SYSTEMD_UNIT=podman-compose@nextcloud.service -v /data/nextcloud:/var/www/html:z -v /home/podman/appdata/nextcloud/nextcloud-apache-optimization.conf:/etc/httpd/conf.d/nextcloud-apache-optimization.conf:z --network=wgnet --network-alias=app -p 11000:80 --restart always --uidmap 0:11000:1000 docker.io/nextcloud:latest

podman run --name=nextcloud_redis -d --pod=pod_nextcloud --label PODMAN_SYSTEMD_UNIT=podman-compose@nextcloud.service --network=wgnet --network-alias=redis --restart always --uidmap 0:11000:1000 docker.io/redis:latest

podman pod create --name=pod_dashy --infra=false

podman run --name=dashy -d --pod=pod_dashy --label PODMAN_SYSTEMD_UNIT=podman-compose@dashy.service -v /home/podman/appdata/dashy/my-conf.yml:/app/user-data/conf.yml:Z --network=wgnet --network-alias=dashy -p 4000:8080 --uidmap 0:4000:1000 lissy93/dashy:latest

podman pod create --name=pod_technitium --infra=false

podman run --name=technitium -d --pod=pod_technitium --label PODMAN_SYSTEMD_UNIT=podman-compose@technitium.service --cap-add NET_ADMIN --cap-add NET_RAW -v /home/podman/appdata/technitium/data:/app/data --network=wgnet --network-alias=technitium -p 53:53/tcp -p 53:53/udp -p 5380:5380 --restart unless-stopped --uidmap 0:1000:2000 docker.io/technitium/dns-server

Below are the three tests I performed on the Dashy container. I ran these through the terminal provided by Cockpit for containers.

This first image was taken before I started the Technitium container; the result is the same regardless of whether it's Technitium or dnsmasq. [screenshot]

This second image was taken after I started the Technitium container. [screenshot]

This third image shows resolv.conf; it is identical before and after starting the Technitium container. [screenshot]

I will try updating aardvark-dns to see if that resolves the issue.
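For checking which aardvark-dns build is actually in use, two commands may help (the binary path is taken from the podman info output above; the `--format` template path is my assumption and may differ between Podman versions):

```shell
# Version of the aardvark-dns binary Podman points at
/usr/lib/podman/aardvark-dns --version

# Version as reported through podman info (template path assumed)
podman info --format '{{.Host.NetworkBackendInfo.DNS.Version}}'
```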