Please follow the standard issue template. We need at least the podman info output. Are you running as root or rootless?
Also, what is the exact error you are seeing?
In general, port 53 is problematic as it might already be in use, e.g. see https://github.com/containers/podman/issues/19108#issuecomment-1619920541
However, ports 67 and 68 should not be affected by that.
Please follow the standard issue template. We need at least the podman info output.
I'm not at home atm, but will do once I get back 👍
Are you running as root or rootless?
It's root because my router does not allow me to set custom ports.
Also what is the exact error you are seeing?
Will report back with the exact message when I get back home, but it basically says that port 53 is bound even though I've disabled any services that might use it.
In general port 53 is problematic as this might be already used, i.e. see https://github.com/containers/podman/issues/19108#issuecomment-1619920541
Yes, but disabling everything related to it should, in theory, work. Especially considering that doing exactly the same with Docker instead of Podman works without any issues.
Podman uses port 53 for aardvark-dns by default, as described in the linked comment, so when you bind 0.0.0.0:53 it will cause a conflict when your network has DNS enabled.
You can fix this in your compose YAML by setting a host IP to bind, e.g. 127.0.0.1:53:53/udp. Or use the dns_port setting in containers.conf.
But that only applies if your problem is with port 53 alone; since you mentioned other ports as well, that should not be the cause.
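For illustration, a minimal compose excerpt of what that host-IP binding could look like (the service name, image tag and the extra UI port are placeholders, not taken from this issue):

services:
  adguardhome:
    image: docker.io/adguard/adguardhome:latest
    ports:
      # Bind DNS to loopback only, instead of the wildcard 0.0.0.0:53 that
      # collides with aardvark-dns when the network has DNS enabled
      - "127.0.0.1:53:53/tcp"
      - "127.0.0.1:53:53/udp"
      # AdGuard Home setup/UI port (example)
      - "3000:3000/tcp"

The key point is that a concrete host IP avoids the wildcard 0.0.0.0:53 listen that conflicts with the built-in resolver.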
Here's the output of podman info:
homeserver% podman info
host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 98.6
    systemPercent: 0.39
    userPercent: 1.01
  cpus: 12
  distribution:
    codename: lunar
    distribution: ubuntu
    version: "23.04"
  eventLogger: journald
  hostname: homeserver
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.2.0-26-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 29331845120
  memTotal: 33469702144
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 86h 29m 50.00s (Approximately 3.58 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/zeno/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/zeno/.local/share/containers/storage
  graphRootAllocated: 61075263488
  graphRootUsed: 16875999232
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  volumePath: /home/zeno/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan 1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1
Here's the exact error message:
...
Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 126
podman start adguardhome
Error: unable to start container "7b7ee60229c13a346a82ec4bc35969e9784959f303bc7da727af918f390c478d": cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 125
You can fix this in your compose yaml by setting a host ip to bind, e.g. 127.0.0.1:53:53/udp.
I tried binding the port that way earlier (and tried again just now), but it doesn't seem to work: traffic doesn't get forwarded to the container.
Or use the dns_port setting in containers.conf.
I don't see a dns_port option; do you mean dns_bind_port? If so, how do I use it?
Again, what I find odd is that running exactly the same compose file with docker-compose up -d (instead of podman-compose up -d) works just fine.
A friendly reminder that this issue had no activity for 30 days.
bump
Having the same issue. For me, podman itself holds the port (as per lsof), and I need to restart the podman process to make it work.
@telometto I've been facing the same issue these last days trying to deploy Pi-Hole as a container.
In my case I wanted to use Quadlets to do so (you can find the code here if you're interested), and when I started the generated service, it failed with the very same error. Then I tried running the container directly from the command line (with podman run ...) and it worked.
I then checked the network used by the container when run from the command line and found what @Luap99 mentioned: the default network has DNS disabled, while for networks created via Quadlets the default appears to be DNS enabled. I just had to manually disable it for the pihole network and that was it.
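If it is useful, a rough sketch of that workaround with a manually created network (the network and container names below are only examples; with Quadlets the equivalent setting appears to be DisableDNS=true in the .network unit):

# Create the network with the aardvark-dns resolver disabled
podman network create --disable-dns pihole-net

# Verify: dns_enabled should now be false in the inspect output
podman network inspect pihole-net

# Run the container on that network and publish port 53 on the host
podman run -d --name pihole --network pihole-net \
  -p 53:53/udp -p 53:53/tcp docker.io/pihole/pihole:latest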
My guess is that podman-compose might be generating a custom network for the service and using the same default as Quadlets?
Maybe you could try either running the service directly via podman run ... and seeing what happens, or reusing my Quadlet files for AdGuard :crossed_fingers:
As mentioned above, if you use a custom network our DNS server will try to bind port 53 on the bridge IP, so if you define -p 53:53/udp this causes a conflict.
I recommend you just set dns_bind_port; see the containers.conf docs: https://github.com/containers/common/blob/main/docs/containers.conf.5.md
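For reference, a minimal sketch of that setting; it lives in the [network] section of containers.conf (e.g. /etc/containers/containers.conf for root), and the alternative port number below is only an example:

[network]
# Move the aardvark-dns forwarder off port 53 so a container can publish 53 on the host
dns_bind_port = 1153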
Discussed in https://github.com/containers/podman/discussions/19519