Closed x80486 closed 2 years ago
Are you absolutely certain that nothing on the machine is already bound to that port? That's usually the cause of this error.
As to your warnings, I wouldn't worry overly about not having Kata; we should probably drop that from WARN to INFO level.
Yeah, I checked that initially; this is how I do it: sudo netstat -tulpn | grep :9080
...but nothing gets printed out. Here is the full picture:
[x80486@archbook:~]$ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:45653 0.0.0.0:* LISTEN 46369/standard-note
tcp 0 0 127.0.0.1:6942 0.0.0.0:* LISTEN 74805/java
tcp 0 0 127.0.0.1:63342 0.0.0.0:* LISTEN 74805/java
tcp6 0 0 127.0.0.1:4332 :::* LISTEN 42092/java
udp 0 0 223.0.0.251:5353 0.0.0.0:* 133217/opera --type
udp 0 0 223.0.0.251:5353 0.0.0.0:* 133217/opera --type
udp 0 0 223.0.0.251:5353 0.0.0.0:* 20724/chromium
udp 0 0 223.0.0.251:5353 0.0.0.0:* 20758/chromium --ty
udp6 0 0 de81::6dfe:4114:c3d:416 :::* 505/NetworkManager
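For completeness, the same check can be done with ss on distros where netstat is deprecated. This is a sketch of mine (not from the thread); 9080 is the port from this issue, and the function reads a listing from stdin so it can be tried against captured output:

```shell
#!/bin/sh
# Check whether an ss/netstat-style listing (read from stdin) shows a
# listener on the given port. Reading from stdin keeps the check testable
# against captured output; for a live check, pipe `ss -tulpn` into it.
listing_has_port() {
    port="$1"
    # Both IPv4 and IPv6 local addresses end in ":<port>" followed by space.
    grep -Eq ":${port}[[:space:]]" -
}

# Live usage:
#   ss -tulpn | listing_has_port 9080 && echo "port 9080 is in use"
```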
...but now that you mention it, I'll restart and see what happens :thinking:
OK, I don't know for sure what happened, but I restarted and got the same problem.
So I removed Nix and the containers/ sub-directories from .config/ and .local/share; restarted, installed Nix again with all the packages, reconfigured everything...and :drum: I don't have the error anymore :partying_face: :balloon:
You can leave this one open if you want to. Before triggering "operation remove all", I did execute the same command to see if some process was using that port again, but at least according to that command, nothing was bound to 9080.
Reopen if it happens again.
It happened again, on a fresh podman 4.1.0 install in a rootless environment. I was creating and deleting docker.io/postgres:alpine and my Java Spring app over and over using podman-compose, until I eventually ran into this error:
Error: rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
Postgres is published on port 5432 and my app on 8080. Postgres is occasionally created correctly (exit code 0), while the app always exits with code 126.
Here's the relevant part of the error wall:
podman run --name=core -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=core --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=core --label com.docker.compose.project.working_dir=/home/selamba/IdeaProjects/core --label com.docker.compose.project.config_files=docker-compose.local.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=core --env-file /home/selamba/IdeaProjects/core/.env.example --net core_core-and-db --network-alias core -p 8080:8080 core
Error: rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
exit code: 126
podman start core
Error: unable to start container "9596f626f307ad7b7f5a9819a3d46fccc3eac641c8733856c60ca98884568c11": rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
exit code: 125
I did some diagnostics:
# ps -ax | grep $(fuser 8080/tcp)
92320 pts/2 Sl 0:00 rootlessport
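Purely as a convenience (this helper is mine, not part of podman), the PIDs of such leftover rootlessport processes can be pulled out of a ps listing and then killed:

```shell
#!/bin/sh
# Extract the PIDs of rootlessport helper processes from a `ps -ax`-style
# listing on stdin. Keeping the parsing on stdin makes it testable without
# a live podman setup.
stale_rootlessport_pids() {
    # ps -ax prints: PID TTY STAT TIME COMMAND; compare the exact command
    # name so that rootlessport-child is not matched.
    awk '$NF == "rootlessport" { print $1 }'
}

# Usage (destructive: kills the stale port forwarders):
#   ps -ax | stale_rootlessport_pids | xargs -r kill
```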
Seems like this rootlessport was supposed to be destroyed with the container itself, but wasn't. I decided to look at the entire process list and was horrified:
...
56979 ? S 0:00 podman
...
92215 pts/2 Sl 0:02 podman start -a core
92320 pts/2 Sl 0:00 rootlessport
92331 pts/2 Sl 0:00 rootlessport-child
92344 ? Ssl 0:00 /usr/bin/conmon --api-version 1 -c 7410059df508530e2a8bf18b2ed2d70be7b59d7afd7f5d5d6f6ca1c8b9e1e650 -u 7410059df508530e2a8bf18b2ed2d70be7b59d7afd7f5d5d6f6ca1c8b9
92347 ? Ss 0:00 sh ./entrypoint.sh
92350 ? Sl 0:27 java -cp app:app/lib/* io.roadmaps.core.Application
...
102760 pts/2 Sl 0:00 rootlessport
102774 pts/2 Sl 0:00 rootlessport-child
102783 ? Ssl 0:00 /usr/bin/conmon --api-version 1 -c 60e5ba5e5777b302b252c2e4cd86970006b6108d208991839b3779042c4e0524 -u 60e5ba5e5777b302b252c2e4cd86970006b6108d208991839b3779042c
102786 ? Ss 0:00 postgres
102920 pts/2 Sl 0:00 /usr/lib/podman/aardvark-dns --config /run/user/1000/containers/networks/aardvark-dns -p 53 run
102971 ? Ss 0:00 postgres: checkpointer
102972 ? Ss 0:00 postgres: background writer
102973 ? Ss 0:00 postgres: walwriter
102974 ? Ss 0:00 postgres: autovacuum launcher
102975 ? Ss 0:00 postgres: stats collector
102976 ? Ss 0:00 postgres: logical replication launcher
...
sh ./entrypoint.sh is the ENTRYPOINT in my app's Containerfile, and java -cp ... is a command from entrypoint.sh.
I would also like to point out process 56979. It looks like a podman daemon.
56979 is the rootless pause process - it's holding open the rootless user namespace.
Reopening, given we seem to have a cause - rootlessport not exiting with the container.
Would it be better to make this a separate issue?
This does not look like an issue with rootlessport. If the full process tree with conmon is still there then of course rootlessport will not exit. Can you provide a reproducer and create a new issue? I assume either podman-compose or podman will not stop/kill the previous container correctly.
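A reproducer in the spirit of what's described above might look like this. It's a sketch, not verified against this setup; the compose file name and the matched error string are taken from the messages earlier in the thread, and podman-compose's exact output format may differ between versions:

```shell
#!/bin/sh
# Hypothetical reproducer: cycle the compose project up and down until the
# bind error appears.
reproduce_bind_error() {
    compose_file="${1:-docker-compose.local.yml}"
    i=1
    while [ "$i" -le 50 ]; do
        out=$(podman-compose -f "$compose_file" up -d 2>&1)
        case $out in
            *'bind: address already in use'*)
                echo "reproduced on iteration $i"
                return 0 ;;
        esac
        podman-compose -f "$compose_file" down >/dev/null 2>&1
        i=$((i + 1))
    done
    echo "not reproduced in 50 iterations"
    return 1
}
```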
Happening to me as well; I had to write an extra script to run ss -tlpn and kill whatever is bound to the port assigned to the container. As we use IPv6, each container has a pretty unique address.
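A sketch of that kind of workaround (mine, so treat it as illustrative): parse ss -tlpn for the PID bound to a port, then kill it. The parsing reads stdin so it can be checked against captured output:

```shell
#!/bin/sh
# Print the PID(s) listening on the given TCP port, from `ss -tlpn`-style
# output on stdin. Works for IPv4 and IPv6 local addresses, since both end
# in ":<port>".
pid_on_port() {
    port="$1"
    # ss -tlpn prints e.g.:
    #   LISTEN 0 128 *:8080 *:* users:(("rootlessport",pid=92320,fd=10))
    # Field 4 is the local address; the PID sits inside pid=NNNN.
    awk -v p=":${port}" '
        $4 ~ p"$" && match($0, /pid=[0-9]+/) {
            print substr($0, RSTART + 4, RLENGTH - 4)
        }'
}

# Usage (destructive):
#   ss -tlpn | pid_on_port 8080 | xargs -r kill
```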
I am seeing the same thing with podman v4.6.2 / F38. Once I kill rootlessport and retry the podman run, the container comes up.
Still a thing. I'm pretty sure it has to do with residual bindings from previous containers. This happened to me when deleting and re-creating containers multiple times; I made sure to delete the containers, but something residual from the port bindings seems to be left behind.
Is this a BUG REPORT or FEATURE REQUEST?
/kind bug
Description
I'm using podman 1.9.3 under Arch Linux (Linux uplink 5.4.47-1-lts #1 SMP Wed, 17 Jun 2020 19:42:02 +0000 x86_64 GNU/Linux). I installed it via Nix packages. I'm building an image with buildah and that's fine, but whenever I try to run the container, it always fails with failed to expose ports via rootlessport: "listen tcp 0.0.0.0:9080: bind: address already in use". I can't seem to spot the issue even when I run the command with --log-level=debug (as I've seen in some issues here).

There is also WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument; I saw that upgrading to fuse-overlayfs 1.0 should solve it, but I have fuse-overlayfs-1.1.0.

Steps to reproduce the issue:
podman-wrapper
and buildah-wrapper
Describe the results you received:
Describe the results you expected:
The container should run successfully! :partying_face:
Additional information you deem important (e.g. issue happens only occasionally):
Since I installed podman via Nix, I had to manually create:
/etc/subuid
/etc/subgid
~/.config/containers/registries.conf
~/.config/containers/policy.json
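For anyone in the same Nix situation, the subordinate ID files take this shape (the username and ranges here are illustrative, not taken from this issue):

```
# /etc/subuid and /etc/subgid share the format  user:start:count
x80486:100000:65536
```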
Here is the output from EVERYTHING (including podman version...hold on!):

Package info (e.g. output of rpm -q podman or apt list podman):