micheljung closed this issue 1 year ago
Interestingly, it only happened when specifying `--network`, even if I specify `--network podman`, which I assumed to be the default network if none is specified.
`--network slirp4netns` is the default as rootless, so that makes sense.
Once that happens, do all containers fail to connect? Please check the output from `podman unshare --rootless-netns ip addr`
and check if the slirp4netns process is running.
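For example, both checks could look like this (a sketch; it assumes `pgrep` is available on the host):
```
# list any running slirp4netns helper(s) with their full command line
pgrep -a slirp4netns || echo "no slirp4netns process found"

# inside the rootless netns, tap0 should be present while slirp4netns is alive
podman unshare --rootless-netns ip addr show tap0
```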
As it happened:
podman run --rm --name xxxxx_fix-certificate-issue-11_systemtest --network xxxxx_fix-certificate-issue-11_default docker.example.com/xxxx-jenkins/build-container-rhel8-java11 bash -c './gradlew test'
Exception in thread "main" java.net.UnknownHostException: nexus.example.com
$ podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
24: podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:50:71:09:83:5a brd ff:ff:ff:ff:ff:ff
inet 10.89.2.1/24 brd 10.89.2.255 scope global podman3
valid_lft forever preferred_lft forever
inet6 fe80::90e6:efff:fe98:a375/64 scope link
valid_lft forever preferred_lft forever
25: vethb35b7269@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether 36:85:46:ed:e7:28 brd ff:ff:ff:ff:ff:ff link-netnsid 11
inet6 fe80::3485:46ff:feed:e728/64 scope link
valid_lft forever preferred_lft forever
26: veth3b522327@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether ce:db:33:d0:b5:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 8
inet6 fe80::ccdb:33ff:fed0:b5a7/64 scope link
valid_lft forever preferred_lft forever
27: veth8d9d5929@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether 8e:24:83:3c:95:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::8c24:83ff:fe3c:95e2/64 scope link
valid_lft forever preferred_lft forever
28: vethe411db57@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether a2:b9:52:3c:a3:ba brd ff:ff:ff:ff:ff:ff link-netnsid 10
inet6 fe80::a0b9:52ff:fe3c:a3ba/64 scope link
valid_lft forever preferred_lft forever
30: vethf1172a83@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether e6:fb:d0:55:a5:71 brd ff:ff:ff:ff:ff:ff link-netnsid 12
inet6 fe80::e4fb:d0ff:fe55:a571/64 scope link
valid_lft forever preferred_lft forever
31: veth36b53aaa@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether 2a:50:71:09:83:5a brd ff:ff:ff:ff:ff:ff link-netnsid 13
inet6 fe80::2850:71ff:fe09:835a/64 scope link
valid_lft forever preferred_lft forever
32: veth7314528a@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether ee:da:2f:e4:02:c0 brd ff:ff:ff:ff:ff:ff link-netnsid 14
inet6 fe80::ecda:2fff:fee4:2c0/64 scope link
valid_lft forever preferred_lft forever
33: vethba0ad71@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether 4e:1d:90:86:3c:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 16
inet6 fe80::4c1d:90ff:fe86:3cd2/64 scope link
valid_lft forever preferred_lft forever
34: podman2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0e:84:89:57:ee:16 brd ff:ff:ff:ff:ff:ff
inet 10.89.1.1/24 brd 10.89.1.255 scope global podman2
valid_lft forever preferred_lft forever
inet6 fe80::3cb7:e2ff:fe04:7a3f/64 scope link
valid_lft forever preferred_lft forever
35: vetha47c1814@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether ee:27:32:27:0c:06 brd ff:ff:ff:ff:ff:ff link-netnsid 17
inet6 fe80::ec27:32ff:fe27:c06/64 scope link
valid_lft forever preferred_lft forever
36: vethf6c40f9a@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether 5e:c2:63:80:28:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::5cc2:63ff:fe80:28d8/64 scope link
valid_lft forever preferred_lft forever
37: vethbd7d96ee@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether fa:0f:87:92:64:72 brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::f80f:87ff:fe92:6472/64 scope link
valid_lft forever preferred_lft forever
38: vethc62a9409@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether 0e:84:89:57:ee:16 brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::c84:89ff:fe57:ee16/64 scope link
valid_lft forever preferred_lft forever
40: veth4035563e@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether c2:7b:52:d3:87:6b brd ff:ff:ff:ff:ff:ff link-netnsid 15
inet6 fe80::c07b:52ff:fed3:876b/64 scope link
valid_lft forever preferred_lft forever
41: veth7caeeefd@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether 3e:0f:04:ab:e8:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 18
inet6 fe80::3c0f:4ff:feab:e8d0/64 scope link
valid_lft forever preferred_lft forever
42: veth915b4498@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether ba:e7:a2:b7:1b:3c brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::b8e7:a2ff:feb7:1b3c/64 scope link
valid_lft forever preferred_lft forever
43: vethc8c00999@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
link/ether 2a:d8:4d:4f:0f:82 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::28d8:4dff:fe4f:f82/64 scope link
valid_lft forever preferred_lft forever
Sometimes we also face the issue that one container gets "no route to host" when connecting to another container. Maybe that's a different issue, maybe it's not.
Sometimes it works, sometimes it can't resolve external hosts, and sometimes it cannot resolve or reach internal hosts.
I'm just having another case of "Could not resolve host" between containers:
$ podman network rm -f test_xxxxx
time="2022-09-26T10:24:02+02:00" level=error msg="Failed to kill slirp4netns process: no such process"
test_xxxxx
$ podman network create test_xxxxx
test_xxxxx
$ podman run -d --rm --network test_xxxxx --name nginx_xxxxx library/nginx:1.21.6-alpine
0238222e882200d59af3021ccef78d0897c471216cd6c1f37fedb60565ddf779
$ podman network ls
NETWORK ID NAME DRIVER
aaa234db3d66 xxxxx_fix-certificate-issue-13_default bridge
46d620e9d8bb yyyyy_local-ports-6_default bridge
2f259bab93aa podman bridge
4d7fe80d83f9 test_xxxxx bridge
$ podman run --rm --network test_xxxxx nginx:1.21.6-alpine curl http://nginx_xxxxx
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: nginx_xxxxx
$ podman container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0238222e8822 docker.example.com/nginx:1.21.6-alpine nginx -g daemon o... 1 second ago Up 2 seconds ago nginx_xxxxx
$ podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tap0: <BROADCAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether de:ac:3f:06:7c:77 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.100/24 brd 10.0.2.255 scope global tap0
valid_lft forever preferred_lft forever
inet6 fd00::dcac:3fff:fe06:7c77/64 scope global tentative dynamic mngtmpaddr
valid_lft 86400sec preferred_lft 14400sec
inet6 fe80::dcac:3fff:fe06:7c77/64 scope link
valid_lft forever preferred_lft forever
3: podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3e:f9:83:8b:e4:95 brd ff:ff:ff:ff:ff:ff
inet 10.89.2.1/24 brd 10.89.2.255 scope global podman3
valid_lft forever preferred_lft forever
inet6 fe80::80cc:72ff:fe59:898b/64 scope link tentative
valid_lft forever preferred_lft forever
4: veth8be11969@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
link/ether 3e:f9:83:8b:e4:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::3cf9:83ff:fe8b:e495/64 scope link
valid_lft forever preferred_lft forever
This was repeatable until I executed `podman network prune -f` and ran it again; then it worked. However, this one still failed:
$ podman run --rm --network test_xxxxx xxxx-jenkins/build-container-rhel8-java11 curl https://nexus.example.com/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:43 --:--:-- 0curl: (6) Could not resolve host: nexus.example.com
After running `podman stop -a && podman container prune -f && podman network prune -f`, it worked again.
In your first comment the tap0 interface is missing, which means that the slirp4netns process was killed or crashed. It looks like you use Jenkins; I remember problems with Jenkins just killing our processes. For the second one I assume the aardvark-dns process was killed, likely the same problem.
I suggest you monitor those processes; it is very likely that something is killing them.
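A minimal monitoring sketch along those lines (the log path and interval are arbitrary, not something Podman requires):
```
# record every 10s whether the rootless network helpers are still alive, so a kill
# can later be correlated with whatever the CI was doing at that moment
while true; do
    date
    pgrep -a slirp4netns  || echo "slirp4netns not running"
    pgrep -a aardvark-dns || echo "aardvark-dns not running"
    sleep 10
done >> /tmp/podman-net-helpers.log 2>&1
```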
Thank you very much @Luap99 for your valuable input!
What I don't understand yet is in which scenario Podman decides to start slirp4netns. When creating the first container? And does it then assume that the process is already running if one container is running?
One might expect that the process is started whenever it's not running.
Podman creates the rootless netns when it does not exist (including starting slirp4netns). As long as the rootless netns is still there (a bind mount under XDG_RUNTIME_DIR/netns/rootless-netns...), we expect that everything is working. You have to stop all containers with a bridge network; only then does Podman clean up the netns (and kill slirp4netns).
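A quick way to check whether that bind mount is still in place (a sketch based on the path mentioned above; exact file names can differ between Podman versions):
```
# the rootless netns is kept alive by a bind mount under $XDG_RUNTIME_DIR/netns/
ls -l "$XDG_RUNTIME_DIR/netns/"
findmnt | grep -F "$XDG_RUNTIME_DIR/netns"
```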
Thanks again, I will look into it.
I tried starting podman with `XDG_RUNTIME_DIR=$(realpath ./podman/run)`, but this gets me `failed to mount runtime directory for rootless netns: no such file or directory` or `Failed to mount runtime directory for rootless netns`.
Maybe I need to ask this: is there a recommended way to use Podman in CI (Jenkins) that completely isolates the containers, networks, slirp4netns processes, etc. from each other? (And if yes, how? ;-))
If you run `--network slirp4netns` you will have one slirp4netns process per container; in this case the containers cannot communicate with each other except via forwarded host ports.
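To illustrate that trade-off, a sketch of two containers on `--network slirp4netns` that can only talk through a published host port (the image, names, and port are placeholders, reusing the nginx image from the examples above):
```
podman run -d --rm --network slirp4netns -p 8080:80 --name web nginx:1.21.6-alpine

# the second container gets its own slirp4netns instance, so it can only reach "web"
# through the port published on the host (one way of picking a host address):
HOST_IP=$(hostname -I | awk '{print $1}')
podman run --rm --network slirp4netns nginx:1.21.6-alpine curl -s "http://${HOST_IP}:8080"
```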
Unfortunately, that's not a solution, because the whole point is to have a separate network with N services (podman compose) and no open host ports, as those are problematic on a CI system.
A friendly reminder that this issue had no activity for 30 days.
In https://github.com/containers/podman/issues/7816#issuecomment-1250630813 we were pointed at https://github.com/jcarrano/wg-podman
Does it seem feasible to adapt this example with a userspace WireGuard¹ implementation², if WireGuard kernel support is not available in CI, and try again?
A friendly reminder that this issue had no activity for 30 days.
@Luap99 I think this is waiting for a response from you.
There is nothing we can do in podman to prevent this. TL;DR do not kill the slirp4netns process.
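If it is Jenkins' ProcessTreeKiller that reaps slirp4netns at the end of a build (an assumption, not something confirmed in this issue), one commonly used escape hatch is to launch Podman with the build cookie overridden so the helper processes it spawns are not matched by the killer:
```
# JENKINS_NODE_COOKIE (pipeline) / BUILD_ID (freestyle jobs) mark processes as belonging
# to a build; processes carrying a different value are skipped by the ProcessTreeKiller
JENKINS_NODE_COOKIE=dontKillMe BUILD_ID=dontKillMe \
    podman run -d --rm --network test_xxxxx --name nginx_xxxxx nginx:1.21.6-alpine
```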
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Sometimes, e.g. when running `curl` in a container, it fails to resolve hostnames. I think it's not a DNS issue but a general networking problem. This only happens occasionally, and apparently only if there are already some containers/networks. Interestingly, it only happened when specifying `--network`, even if I specify `--network podman`, which I assumed to be the default network if none is specified.
Steps to reproduce the issue:
I don't know how to reproduce this reliably; just start a bunch of containers and networks and at some point it goes boom. And when it does, it can always be reproduced like this:
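As a sketch, once the host is in the broken state the pattern looks like this (the network and container names are placeholders, mirroring the commands shown earlier in this thread):
```
podman network create test_xxxxx
podman run -d --rm --network test_xxxxx --name nginx_xxxxx nginx:1.21.6-alpine
podman run --rm --network test_xxxxx nginx:1.21.6-alpine curl http://nginx_xxxxx
# -> curl: (6) Could not resolve host: nginx_xxxxx
```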
Describe the results you received:
curl: (6) Could not resolve host: github.com
(This issue isn't limited to `curl`; it happens with any connection.)
Describe the results you expected:
`curl` (or any connection) works.
Additional information you deem important (e.g. issue happens only occasionally):
It happens occasionally, especially when there are networks/containers. Once it's in a "broken" state, it always happens. After running `podman stop -a && podman container prune -f && podman network prune -f`, it doesn't happen.
Findings
I collected and compared the debug output to see what's happening. Here are my findings for `--network podman` when it's working vs. `--network podman` when it's broken:
- `Made network namespace at /run/user/1108/netns/netns-*`
- `creating rootless network namespace` is logged
- `/usr/bin/slirp4netns` is executed
- `chain NETAVARK_FORWARD created on table filter` logged (working) vs. `chain NETAVARK_FORWARD exists on table filter` logged (broken)
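A sketch of how such a comparison can be captured (Podman writes its debug log to stderr; the file names are arbitrary):
```
podman run --log-level=debug --rm --network podman curlimages/curl \
    curl -m 2 https://github.com 2> working.log
# ...once the host is in the broken state, repeat:
podman run --log-level=debug --rm --network podman curlimages/curl \
    curl -m 2 https://github.com 2> broken.log
diff working.log broken.log | grep -E 'network namespace|slirp4netns|NETAVARK_FORWARD'
```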
loggedLogs
* I remove the timestamps and replaced the container ID with `{containerId}` for easier diff * I didn't actually use `curlimages/curl` but a custom image * In the 2nd and 3rd log, I reduced some lines that were the same as in the 1st log to `[...]` because GitHub limits me to 65536 characters. `podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com` (no `--network`, always works) ``` level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" level=debug msg="Merged system config \"/etc/containers/containers.conf\"" level=debug msg="Using conmon: \"/usr/bin/conmon\"" level=debug msg="Initializing boltdb state at /home/jenkins/.local/share/containers/storage/libpod/bolt_state.db" level=debug msg="Overriding run root \"/run/user/1108/containers\" with \"/run/user/1108/xdgruntime/containers\" from database" level=debug msg="Overriding tmp dir \"/run/user/1108/libpod/tmp\" with \"/run/user/1108/xdgruntime/libpod/tmp\" from database" level=debug msg="Using graph driver overlay" level=debug msg="Using graph root /home/jenkins/.local/share/containers/storage" level=debug msg="Using run root /run/user/1108/xdgruntime/containers" level=debug msg="Using static dir /home/jenkins/.local/share/containers/storage/libpod" level=debug msg="Using tmp dir /run/user/1108/xdgruntime/libpod/tmp" level=debug msg="Using volume path /home/jenkins/.local/share/containers/storage/volumes" level=debug msg="Set libpod namespace to \"\"" level=debug msg="Not configuring container store" level=debug msg="Initializing event backend file" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" level=info msg="Setting parallel job count to 25" level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" level=debug msg="Merged system config \"/etc/containers/containers.conf\"" level=debug msg="Using conmon: \"/usr/bin/conmon\"" level=debug msg="Initializing boltdb state at /home/jenkins/.local/share/containers/storage/libpod/bolt_state.db" level=debug msg="Overriding run root \"/run/user/1108/containers\" with \"/run/user/1108/xdgruntime/containers\" from database" level=debug msg="Overriding tmp dir \"/run/user/1108/libpod/tmp\" with \"/run/user/1108/xdgruntime/libpod/tmp\" from database" level=debug msg="Using graph driver overlay" level=debug msg="Using graph root /home/jenkins/.local/share/containers/storage" level=debug msg="Using run root /run/user/1108/xdgruntime/containers" level=debug msg="Using static dir /home/jenkins/.local/share/containers/storage/libpod" level=debug msg="Using tmp dir /run/user/1108/xdgruntime/libpod/tmp" level=debug msg="Using volume path /home/jenkins/.local/share/containers/storage/volumes" level=debug msg="Set libpod namespace to \"\"" level=debug msg="[graphdriver] trying 
provided driver \"overlay\"" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that metacopy is not being used" level=debug msg="Cached value indicated that native-diff is usable" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" level=debug msg="Initializing event backend file" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" level=info msg="Setting parallel job count to 25" level=debug msg="Pulling image curlimages/curl (policy: missing)" level=debug msg="Looking up image \"curlimages/curl\" in local containers storage" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf\"" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/000-shortnames.conf\"" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/001-rhel-shortnames.conf\"" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf\"" level=debug msg="Trying \"harbor.example.com/curlimages/curl:latest\" ..." level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b)" level=debug msg="Looking up image \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" level=debug msg="Trying \"harbor.example.com/curlimages/curl:latest\" ..." level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Found image \"harbor.example.com/curlimages/curl:latest\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Found image \"harbor.example.com/curlimages/curl:latest\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b)" level=debug msg="Looking up image \"curlimages/curl\" in local containers storage" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" level=debug msg="Trying \"harbor.example.com/curlimages/curl:latest\" ..." 
level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b)" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="using systemd mode: false" level=debug msg="No hostname set; container's hostname will default to runtime default" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" level=debug msg="Allocated lock 87 for container {containerId}" level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Cached value indicated that overlay is not supported" level=debug msg="Check for idmapped mounts support " level=debug msg="Created container \"{containerId}\"" level=debug msg="Container \"{containerId}\" has work directory \"/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata\"" level=debug msg="Container \"{containerId}\" has run directory \"/run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata\"" level=debug msg="Not attaching to stdin" level=debug msg="Made network namespace at /run/user/1108/netns/netns-328dc113-ef42-e2af-29cf-a7c1f614c120 for container {containerId}" level=debug msg="[graphdriver] trying provided driver \"overlay\"" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that metacopy is not being used" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" level=debug msg="Cached value indicated that volatile is being used" level=debug msg="overlay: 
mount_data=lowerdir=/home/jenkins/.local/share/containers/storage/overlay/l/KK7OCHWKB6ELXPVGO7ATFD6XEV:/home/jenkins/.local/share/containers/storage/overlay/l/47MRODP3P7LS5S6MZIRZKOZBI7:/home/jenkins/.local/share/containers/storage/overlay/l/E2XGRN6TUGWD3USS6K2ASNVG25:/home/jenkins/.local/share/containers/storage/overlay/l/G7OLM7Y73VVE7TXKPAZEJOYJBD:/home/jenkins/.local/share/containers/storage/overlay/l/WIJ2LTKW2TVXAUZY377NXZOA6K:/home/jenkins/.local/share/containers/storage/overlay/l/OCYKCOGXFWYBRGF63L7F77WQCS:/home/jenkins/.local/share/containers/storage/overlay/l/HITP76V5JUWFIEO4WD4NAUBO3B:/home/jenkins/.local/share/containers/storage/overlay/l/37MKERMYZTH44IRAXFXJN7KRSP:/home/jenkins/.local/share/containers/storage/overlay/l/YCC4WC6DMPWCFOBJE5NAXI2YZA:/home/jenkins/.local/share/containers/storage/overlay/l/BIUM4RYTKZUYTQ4GZWQHBHKO23:/home/jenkins/.local/share/containers/storage/overlay/l/YPYJR6HUTT6ZAL4QAXSPVCEY2B:/home/jenkins/.local/share/containers/storage/overlay/l/O5XRDVQQEH3KBRU32GVXEPYT5W:/home/jenkins/.local/share/containers/storage/overlay/l/LNDUZ5B7JJVMSVBZP22VGW6OI2:/home/jenkins/.local/share/containers/storage/overlay/l/MM5MPACLHWZM2G2YUBY7ZCRDN4:/home/jenkins/.local/share/containers/storage/overlay/l/MDLZVMMGJW7S7EUMV6OSJQ3D4S:/home/jenkins/.local/share/containers/storage/overlay/l/EOCCREDRGCLXMYJUFIANOVK4RD:/home/jenkins/.local/share/containers/storage/overlay/l/QHLYT44NSGTELQMQKYGSNNWTWW:/home/jenkins/.local/share/containers/storage/overlay/l/OS2N5REXJQ4JBMXNLQSPBMUZTA:/home/jenkins/.local/share/containers/storage/overlay/l/QCBHNIWEK4D3CC2EWRX3IUM7IK,upperdir=/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/diff,workdir=/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/work,,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c519,c635\"" level=debug msg="Mounted container \"{containerId}\" at \"/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/merged\"" level=debug msg="Created root filesystem for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/merged" level=debug msg="slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1108/netns/netns-328dc113-ef42-e2af-29cf-a7c1f614c120 tap0" level=debug msg="Not modifying container {containerId} /etc/passwd" level=debug msg="Not modifying container {containerId} /etc/group" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" level=debug msg="Workdir \"/\" resolved to host path \"/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/merged\"" level=debug msg="Created OCI spec for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/config.json" level=debug msg="/usr/bin/conmon messages will be logged to syslog" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c {containerId} -u {containerId} -r /usr/bin/runc -b /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata -p 
/run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/pidfile -n youthful_perlman --exit-dir /run/user/1108/xdgruntime/libpod/tmp/exits --full-attach -l k8s-file:/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jenkins/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1108/xdgruntime/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1108/xdgruntime/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/jenkins/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg {containerId}]" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied level=debug msg="Received: 493865" level=info msg="Got Conmon PID as {PID}" level=debug msg="Created container {containerId} in OCI runtime" level=debug msg="Attaching to container {containerId}" level=debug msg="Starting container {containerId} with command [curl -m 2 https://github.com]" level=debug msg="Started container {containerId}" level=info msg="Received shutdown.Stop(), terminating!" PID=493834 level=debug msg="Enabling signal proxying" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 8139 100 8139 0 0 305k 0 --:--:-- --:--:-- --:--:-- 305k level=debug msg="Checking if container {containerId} should restart" level=debug msg="Removing container {containerId}" level=debug msg="Cleaning up container {containerId}" level=debug msg="Tearing down network namespace at /run/user/1108/netns/netns-328dc113-ef42-e2af-29cf-a7c1f614c120 for container {containerId}" level=debug msg="Successfully cleaned up container {containerId}" level=debug msg="Unmounted container \"{containerId}\"" level=debug msg="Removing all exec sessions for container {containerId}" level=debug msg="Container {containerId} storage is already unmounted, skipping..." level=debug msg="Called run.PersistentPostRunE(podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com)" ``` `podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com` **When it's working** ``` level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com.example.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" [...] 
level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" [...] level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" level=debug msg="Successfully loaded network pt-10630-inbucket-3_default: &{pt-10630-inbucket-3_default 0904bef0d91840788509069c47804cc66dd334a1c71cf10a66b85e966b832ec0 bridge podman1 2022-09-21 23:40:15.239870328 +0200 CEST [{{{10.89.0.0 ffffff00}} 10.89.0.1Output of
`podman version`:
Output of `podman info`:
Package info (e.g. output of `rpm -q podman` or `apt list podman`):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes, I checked the Podman Troubleshooting Guide. No, I didn't test 4.2.1 because there's no RPM in the RHEL8 repositories yet, but from the changelog it's unlikely that this issue was fixed.
Additional environment details (AWS, VirtualBox, physical, etc.):
Some VM, I guess it's irrelevant.