containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Networking fails to set up correctly under unknown conditions #15899

Closed · micheljung closed this issue 1 year ago

micheljung commented 2 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Sometimes, e.g. when running curl in a container, the container fails to resolve hostnames. I think it's not a DNS issue but a general networking problem. It only happens occasionally, and apparently only if some containers/networks already exist. Interestingly, it only happens when `--network` is specified, even if I specify `--network podman`, which I assumed to be the default network when none is given.

Steps to reproduce the issue:

I don't know how to reproduce this reliably: just start a bunch of containers and networks, and at some point it goes boom. Once it's in that state, the failure can always be reproduced like this:

podman run --rm --network podman curlimages/curl curl https://github.com/
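
Since I don't know the exact trigger, a brute-force way to hunt for the broken state is to accumulate networks and containers in a loop and re-run the probe above after each round. This is only a minimal sketch of "start a bunch of containers and networks": the `stress-net-*`/`stress-ctr-*` names are hypothetical placeholders, not from my setup.

```
#!/usr/bin/env bash
# Sketch: accumulate networks/containers until the probe above starts failing.
set -u
for i in $(seq 1 20); do
  podman network create "stress-net-$i"
  podman run -d --name "stress-ctr-$i" --network "stress-net-$i" \
    curlimages/curl sleep 600
  # Same probe as above; -sSf makes curl quiet but fail on any error.
  if ! podman run --rm --network podman curlimages/curl \
      curl -m 2 -sSf https://github.com/ >/dev/null; then
    echo "networking broke after adding $i networks" >&2
    break
  fi
done
```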

Describe the results you received:

curl: (6) Could not resolve host: github.com

(The issue isn't limited to curl; it happens with any connection.)

Describe the results you expected:

curl (or any connection) works.

Additional information you deem important (e.g. issue happens only occasionally):

It happens occasionally, especially when there are existing networks/containers. Once it's in the "broken" state, it always happens. After running podman stop -a && podman container prune -f && podman network prune -f, it no longer happens.
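
For reference, that reset sequence written out step by step (the same commands as above, nothing new):

```
podman stop -a               # stop all running containers
podman container prune -f    # remove all stopped containers
podman network prune -f      # remove all unused networks
```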

Findings

I collected debug output from working and broken runs and compared them. Here are my findings:

| Observation | No explicit network (always works) | `--network podman` when it's working | `--network podman` when it's broken |
| --- | --- | --- | --- |
| Made network namespace at `/run/user/1108/netns/netns-*` | Yes | Yes | Yes |
| "creating rootless network namespace" is logged | No | Yes | No ⚠ |
| `/usr/bin/slirp4netns` is executed | Yes | Yes | No ⚠ |
| Netavark is set up and firewall rules are created | No | Yes | Yes |
| "chain NETAVARK_FORWARD created on table filter" logged | No (because no firewall rules) | Yes | No ⚠ |
| "chain NETAVARK_FORWARD exists on table filter" logged | No (because no firewall rules) | No | Yes ⚠ |
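
Given these markers, a captured debug log can be classified mechanically. A minimal sketch (the grep patterns are exactly the marker strings from the table above; the log file path is a placeholder):

```
#!/usr/bin/env bash
# Classify a captured `podman run --log-level=debug ...` debug log.
LOG=${1:?usage: classify.sh <debug-log-file>}
if grep -q 'chain NETAVARK_FORWARD exists on table filter' "$LOG"; then
  echo "broken: stale NETAVARK_FORWARD chain found, slirp4netns step was skipped"
elif grep -q 'creating rootless network namespace' "$LOG" \
    && grep -q '/usr/bin/slirp4netns' "$LOG"; then
  echo "working: rootless netns created and slirp4netns launched"
else
  echo "inconclusive"
fi
```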

Logs

* I removed the timestamps and replaced the container ID with `{containerId}` for easier diffing
* I didn't actually use `curlimages/curl` but a custom image
* In the 2nd and 3rd log, I reduced some lines that were the same as in the 1st log to `[...]` because GitHub limits me to 65536 characters

`podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com` (no `--network`, always works)

```
level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" level=debug msg="Merged system config \"/etc/containers/containers.conf\"" level=debug msg="Using conmon: \"/usr/bin/conmon\"" level=debug msg="Initializing boltdb state at /home/jenkins/.local/share/containers/storage/libpod/bolt_state.db" level=debug msg="Overriding run root \"/run/user/1108/containers\" with \"/run/user/1108/xdgruntime/containers\" from database" level=debug msg="Overriding tmp dir \"/run/user/1108/libpod/tmp\" with \"/run/user/1108/xdgruntime/libpod/tmp\" from database" level=debug msg="Using graph driver overlay" level=debug msg="Using graph root /home/jenkins/.local/share/containers/storage" level=debug msg="Using run root /run/user/1108/xdgruntime/containers" level=debug msg="Using static dir /home/jenkins/.local/share/containers/storage/libpod" level=debug msg="Using tmp dir /run/user/1108/xdgruntime/libpod/tmp" level=debug msg="Using volume path /home/jenkins/.local/share/containers/storage/volumes" level=debug msg="Set libpod namespace to \"\"" level=debug msg="Not configuring container store" level=debug msg="Initializing event backend file" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" level=info msg="Setting parallel job count to 25" level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" level=debug msg="Merged system config \"/etc/containers/containers.conf\"" level=debug msg="Using conmon: \"/usr/bin/conmon\"" level=debug msg="Initializing boltdb state at /home/jenkins/.local/share/containers/storage/libpod/bolt_state.db" level=debug msg="Overriding run root \"/run/user/1108/containers\" with \"/run/user/1108/xdgruntime/containers\" from database" level=debug msg="Overriding tmp dir \"/run/user/1108/libpod/tmp\" with \"/run/user/1108/xdgruntime/libpod/tmp\" from database" level=debug msg="Using graph driver overlay" level=debug msg="Using graph root /home/jenkins/.local/share/containers/storage" level=debug msg="Using run root /run/user/1108/xdgruntime/containers" level=debug msg="Using static dir /home/jenkins/.local/share/containers/storage/libpod" level=debug msg="Using tmp dir /run/user/1108/xdgruntime/libpod/tmp" level=debug msg="Using volume path /home/jenkins/.local/share/containers/storage/volumes" level=debug msg="Set libpod namespace to \"\"" level=debug msg="[graphdriver] 
trying provided driver \"overlay\"" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that metacopy is not being used" level=debug msg="Cached value indicated that native-diff is usable" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" level=debug msg="Initializing event backend file" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" level=info msg="Setting parallel job count to 25" level=debug msg="Pulling image curlimages/curl (policy: missing)" level=debug msg="Looking up image \"curlimages/curl\" in local containers storage" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf\"" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/000-shortnames.conf\"" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/001-rhel-shortnames.conf\"" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf\"" level=debug msg="Trying \"harbor.example.com/curlimages/curl:latest\" ..." level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b)" level=debug msg="Looking up image \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" level=debug msg="Trying \"harbor.example.com/curlimages/curl:latest\" ..." level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Found image \"harbor.example.com/curlimages/curl:latest\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Found image \"harbor.example.com/curlimages/curl:latest\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b)" level=debug msg="Looking up image \"curlimages/curl\" in local containers storage" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" level=debug msg="Trying \"harbor.example.com/curlimages/curl:latest\" ..." 
level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage" level=debug msg="Found image \"curlimages/curl\" as \"harbor.example.com/curlimages/curl:latest\" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b)" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="Inspecting image 1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b" level=debug msg="using systemd mode: false" level=debug msg="No hostname set; container's hostname will default to runtime default" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" level=debug msg="Allocated lock 87 for container {containerId}" level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Cached value indicated that overlay is not supported" level=debug msg="Check for idmapped mounts support " level=debug msg="Created container \"{containerId}\"" level=debug msg="Container \"{containerId}\" has work directory \"/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata\"" level=debug msg="Container \"{containerId}\" has run directory \"/run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata\"" level=debug msg="Not attaching to stdin" level=debug msg="Made network namespace at /run/user/1108/netns/netns-328dc113-ef42-e2af-29cf-a7c1f614c120 for container {containerId}" level=debug msg="[graphdriver] trying provided driver \"overlay\"" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that metacopy is not being used" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" level=debug msg="Cached value indicated that volatile is being used" level=debug msg="overlay: 
mount_data=lowerdir=/home/jenkins/.local/share/containers/storage/overlay/l/KK7OCHWKB6ELXPVGO7ATFD6XEV:/home/jenkins/.local/share/containers/storage/overlay/l/47MRODP3P7LS5S6MZIRZKOZBI7:/home/jenkins/.local/share/containers/storage/overlay/l/E2XGRN6TUGWD3USS6K2ASNVG25:/home/jenkins/.local/share/containers/storage/overlay/l/G7OLM7Y73VVE7TXKPAZEJOYJBD:/home/jenkins/.local/share/containers/storage/overlay/l/WIJ2LTKW2TVXAUZY377NXZOA6K:/home/jenkins/.local/share/containers/storage/overlay/l/OCYKCOGXFWYBRGF63L7F77WQCS:/home/jenkins/.local/share/containers/storage/overlay/l/HITP76V5JUWFIEO4WD4NAUBO3B:/home/jenkins/.local/share/containers/storage/overlay/l/37MKERMYZTH44IRAXFXJN7KRSP:/home/jenkins/.local/share/containers/storage/overlay/l/YCC4WC6DMPWCFOBJE5NAXI2YZA:/home/jenkins/.local/share/containers/storage/overlay/l/BIUM4RYTKZUYTQ4GZWQHBHKO23:/home/jenkins/.local/share/containers/storage/overlay/l/YPYJR6HUTT6ZAL4QAXSPVCEY2B:/home/jenkins/.local/share/containers/storage/overlay/l/O5XRDVQQEH3KBRU32GVXEPYT5W:/home/jenkins/.local/share/containers/storage/overlay/l/LNDUZ5B7JJVMSVBZP22VGW6OI2:/home/jenkins/.local/share/containers/storage/overlay/l/MM5MPACLHWZM2G2YUBY7ZCRDN4:/home/jenkins/.local/share/containers/storage/overlay/l/MDLZVMMGJW7S7EUMV6OSJQ3D4S:/home/jenkins/.local/share/containers/storage/overlay/l/EOCCREDRGCLXMYJUFIANOVK4RD:/home/jenkins/.local/share/containers/storage/overlay/l/QHLYT44NSGTELQMQKYGSNNWTWW:/home/jenkins/.local/share/containers/storage/overlay/l/OS2N5REXJQ4JBMXNLQSPBMUZTA:/home/jenkins/.local/share/containers/storage/overlay/l/QCBHNIWEK4D3CC2EWRX3IUM7IK,upperdir=/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/diff,workdir=/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/work,,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c519,c635\"" level=debug msg="Mounted container \"{containerId}\" at \"/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/merged\"" level=debug msg="Created root filesystem for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/merged" level=debug msg="slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1108/netns/netns-328dc113-ef42-e2af-29cf-a7c1f614c120 tap0" level=debug msg="Not modifying container {containerId} /etc/passwd" level=debug msg="Not modifying container {containerId} /etc/group" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" level=debug msg="Workdir \"/\" resolved to host path \"/home/jenkins/.local/share/containers/storage/overlay/ea0822cd63cdaeeacc8dd16fcf6b6311cb0c6c1309f58e17230b9ea450b3995e/merged\"" level=debug msg="Created OCI spec for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/config.json" level=debug msg="/usr/bin/conmon messages will be logged to syslog" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c {containerId} -u {containerId} -r /usr/bin/runc -b /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata -p 
/run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/pidfile -n youthful_perlman --exit-dir /run/user/1108/xdgruntime/libpod/tmp/exits --full-attach -l k8s-file:/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jenkins/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1108/xdgruntime/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1108/xdgruntime/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/jenkins/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg {containerId}]" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied level=debug msg="Received: 493865" level=info msg="Got Conmon PID as {PID}" level=debug msg="Created container {containerId} in OCI runtime" level=debug msg="Attaching to container {containerId}" level=debug msg="Starting container {containerId} with command [curl -m 2 https://github.com]" level=debug msg="Started container {containerId}" level=info msg="Received shutdown.Stop(), terminating!" PID=493834 level=debug msg="Enabling signal proxying" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 8139 100 8139 0 0 305k 0 --:--:-- --:--:-- --:--:-- 305k level=debug msg="Checking if container {containerId} should restart" level=debug msg="Removing container {containerId}" level=debug msg="Cleaning up container {containerId}" level=debug msg="Tearing down network namespace at /run/user/1108/netns/netns-328dc113-ef42-e2af-29cf-a7c1f614c120 for container {containerId}" level=debug msg="Successfully cleaned up container {containerId}" level=debug msg="Unmounted container \"{containerId}\"" level=debug msg="Removing all exec sessions for container {containerId}" level=debug msg="Container {containerId} storage is already unmounted, skipping..." level=debug msg="Called run.PersistentPostRunE(podman run --log-level=debug --rm curlimages/curl curl -m 2 https://github.com)"
```

`podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com` **When it's working**

```
level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com.example.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" [...] 
level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" [...] level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" level=debug msg="Successfully loaded network pt-10630-inbucket-3_default: &{pt-10630-inbucket-3_default 0904bef0d91840788509069c47804cc66dd334a1c71cf10a66b85e966b832ec0 bridge podman1 2022-09-21 23:40:15.239870328 +0200 CEST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] false false true map[com.docker.compose.project:pt-10630-inbucket-3 io.podman.compose.project:pt-10630-inbucket-3] map[] map[driver:host-local]}" level=debug msg="Successfully loaded 2 networks" level=debug msg="Allocated lock 0 for container {containerId}" level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Cached value indicated that overlay is not supported" level=debug msg="Check for idmapped mounts support " level=debug msg="Created container \"{containerId}\"" level=debug msg="Container \"{containerId}\" has work directory \"/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata\"" level=debug msg="Container \"{containerId}\" has run directory \"/run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata\"" level=debug msg="Not attaching to stdin" level=debug msg="[graphdriver] trying provided driver \"overlay\"" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that metacopy is not being used" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" level=debug msg="Cached value indicated that volatile is being used" level=debug msg="Made network namespace at /run/user/1108/netns/netns-ca2fe7e2-e421-bf53-2a4f-108ed9fb8106 for container {containerId}" level=debug msg="overlay: 
mount_data=lowerdir=/home/jenkins/.local/share/containers/storage/overlay/l/KK7OCHWKB6ELXPVGO7ATFD6XEV:/home/jenkins/.local/share/containers/storage/overlay/l/47MRODP3P7LS5S6MZIRZKOZBI7:/home/jenkins/.local/share/containers/storage/overlay/l/E2XGRN6TUGWD3USS6K2ASNVG25:/home/jenkins/.local/share/containers/storage/overlay/l/G7OLM7Y73VVE7TXKPAZEJOYJBD:/home/jenkins/.local/share/containers/storage/overlay/l/WIJ2LTKW2TVXAUZY377NXZOA6K:/home/jenkins/.local/share/containers/storage/overlay/l/OCYKCOGXFWYBRGF63L7F77WQCS:/home/jenkins/.local/share/containers/storage/overlay/l/HITP76V5JUWFIEO4WD4NAUBO3B:/home/jenkins/.local/share/containers/storage/overlay/l/37MKERMYZTH44IRAXFXJN7KRSP:/home/jenkins/.local/share/containers/storage/overlay/l/YCC4WC6DMPWCFOBJE5NAXI2YZA:/home/jenkins/.local/share/containers/storage/overlay/l/BIUM4RYTKZUYTQ4GZWQHBHKO23:/home/jenkins/.local/share/containers/storage/overlay/l/YPYJR6HUTT6ZAL4QAXSPVCEY2B:/home/jenkins/.local/share/containers/storage/overlay/l/O5XRDVQQEH3KBRU32GVXEPYT5W:/home/jenkins/.local/share/containers/storage/overlay/l/LNDUZ5B7JJVMSVBZP22VGW6OI2:/home/jenkins/.local/share/containers/storage/overlay/l/MM5MPACLHWZM2G2YUBY7ZCRDN4:/home/jenkins/.local/share/containers/storage/overlay/l/MDLZVMMGJW7S7EUMV6OSJQ3D4S:/home/jenkins/.local/share/containers/storage/overlay/l/EOCCREDRGCLXMYJUFIANOVK4RD:/home/jenkins/.local/share/containers/storage/overlay/l/QHLYT44NSGTELQMQKYGSNNWTWW:/home/jenkins/.local/share/containers/storage/overlay/l/OS2N5REXJQ4JBMXNLQSPBMUZTA:/home/jenkins/.local/share/containers/storage/overlay/l/QCBHNIWEK4D3CC2EWRX3IUM7IK,upperdir=/home/jenkins/.local/share/containers/storage/overlay/da651f81505aca0b7bd758f604eed55355bfb11f0d55135f453b77de78fc8282/diff,workdir=/home/jenkins/.local/share/containers/storage/overlay/da651f81505aca0b7bd758f604eed55355bfb11f0d55135f453b77de78fc8282/work,,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c106,c139\"" level=debug msg="creating rootless network namespace with name \"rootless-netns-db3196c27706c6dbbc3a\"" level=debug msg="Mounted container \"{containerId}\" at \"/home/jenkins/.local/share/containers/storage/overlay/da651f81505aca0b7bd758f604eed55355bfb11f0d55135f453b77de78fc8282/merged\"" level=debug msg="Created root filesystem for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay/da651f81505aca0b7bd758f604eed55355bfb11f0d55135f453b77de78fc8282/merged" level=debug msg="slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/1108/netns/rootless-netns-db3196c27706c6dbbc3a tap0" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" [DEBUG netavark::network::validation] "Validating network namespace..." [DEBUG netavark::commands::setup] "Setting up..." 
[INFO netavark::firewall] Using iptables firewall driver [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1 [DEBUG netavark::commands::setup] Setting up network podman with driver bridge [DEBUG netavark::network::core] Container veth name: "eth0" [DEBUG netavark::network::core] Brige name: "podman0" [DEBUG netavark::network::core] IP address for veth vector: [10.88.0.20/16] [DEBUG netavark::network::core] Gateway ip address vector: [10.88.0.1/16] [DEBUG netavark::network::core] Configured static up address for eth0 [DEBUG netavark::network::core] Container veth mac: "7e:2a:0e:1f:3d:9b" [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-1D8721804F16F created on table nat [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT exists on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT created on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F exists on table nat and chain POSTROUTING [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F created on table nat and chain POSTROUTING [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT created on table filter and chain NETAVARK_FORWARD [DEBUG netavark::commands::setup] { "podman": StatusBlock { dns_search_domains: Some( [], ), dns_server_ips: Some( [], ), interfaces: Some( { "eth0": NetInterface { mac_address: "7e:2a:0e:1f:3d:9b", subnets: Some( [ NetAddress { gateway: Some( 10.88.0.1, ), ipnet: 10.88.0.20/16, }, ], ), }, }, ), }, } [DEBUG netavark::commands::setup] "Setup complete" level=debug msg="Adding nameserver(s) from network status of '[]'" level=debug msg="Adding search domain(s) from network status of '[]'" level=debug msg="Not modifying container {containerId} /etc/passwd" level=debug msg="Not modifying container {containerId} /etc/group" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" level=debug msg="Workdir \"/\" resolved to host path \"/home/jenkins/.local/share/containers/storage/overlay/da651f81505aca0b7bd758f604eed55355bfb11f0d55135f453b77de78fc8282/merged\"" level=debug msg="Created OCI spec for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/config.json" level=debug msg="/usr/bin/conmon messages will be logged to syslog" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c {containerId} -u 
{containerId} -r /usr/bin/runc -b /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata -p /run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/pidfile -n naughty_sammet --exit-dir /run/user/1108/xdgruntime/libpod/tmp/exits --full-attach -l k8s-file:/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jenkins/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1108/xdgruntime/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1108/xdgruntime/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/jenkins/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg {containerId}]" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied level=debug msg="Received: 730360" level=info msg="Got Conmon PID as {PID}" level=debug msg="Created container {containerId} in OCI runtime" level=debug msg="Attaching to container {containerId}" level=debug msg="Starting container {containerId} with command [curl -m 2 https://github.com]" level=debug msg="Started container {containerId}" level=info msg="Received shutdown.Stop(), terminating!" PID=730257 level=debug msg="Enabling signal proxying" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 8139 100 8139 0 0 345k 0 --:--:-- --:--:-- --:--:-- 345k level=debug msg="Checking if container {containerId} should restart" level=debug msg="Removing container {containerId}" level=debug msg="Cleaning up container {containerId}" level=debug msg="Tearing down network namespace at /run/user/1108/netns/netns-ca2fe7e2-e421-bf53-2a4f-108ed9fb8106 for container {containerId}" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" [DEBUG netavark::commands::teardown] "Tearing down.." 
[INFO netavark::firewall] Using iptables firewall driver [DEBUG netavark::commands::teardown] Setting up network podman with driver bridge [DEBUG netavark::network::core_utils] bridge has 1 connected interfaces [DEBUG netavark::network::core] Container veth name being removed: "eth0" [DEBUG netavark::network::core] Container veth removed: "eth0" [DEBUG netavark::commands::teardown] "Teardown complete" level=debug msg="Cleaning up rootless network namespace" level=debug msg="Successfully cleaned up container {containerId}" level=debug msg="Unmounted container \"{containerId}\"" level=debug msg="Removing all exec sessions for container {containerId}" level=debug msg="Container {containerId} storage is already unmounted, skipping..." level=debug msg="Called run.PersistentPostRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com)"
```

`podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com` **When it's broken**

```
level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" [...] level=debug msg="Initializing event backend file" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" level=info msg="Setting parallel job count to 25" level=info msg="podman filtering at log level debug" level=debug msg="Called run.PersistentPreRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com)" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\"" [...] 
level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" level=debug msg="Successfully loaded network debug-ports-1_default: &{debug-ports-1_default ee44c85b1182c46fddee2061a108166bd4c6c0e15d20fab56babaf1fe1e86c86 bridge podman2 2022-08-30 08:54:21.6901903 +0200 CEST [{{{10.89.1.0 ffffff00}} 10.89.1.1 }] false false true map[com.docker.compose.project:debug-ports-1 io.podman.compose.project:debug-ports-1] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network debug-ports-2_default: &{debug-ports-2_default 01979019b50f6c85012096d23f7a2c693cdd381f7ccab201c597ce81ab8612b0 bridge podman4 2022-08-30 22:28:13.669394993 +0200 CEST [{{{10.89.3.0 ffffff00}} 10.89.3.1 }] false false true map[com.docker.compose.project:debug-ports-2 io.podman.compose.project:debug-ports-2] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network debug-ports-3_default: &{debug-ports-3_default b9aef663f70502a9aa5ee398e2cfead2e665b3f51934fb36c3f1fec9cfb9d8b8 bridge podman8 2022-08-31 22:28:13.790223831 +0200 CEST [{{{10.89.7.0 ffffff00}} 10.89.7.1 }] false false true map[com.docker.compose.project:debug-ports-3 io.podman.compose.project:debug-ports-3] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network debug-ports-4_default: &{debug-ports-4_default 0cfb42ebfced9d5d47b882a7fa09dde07864f8222747155751c14f1539d13dad bridge podman13 2022-09-01 23:00:27.450807133 +0200 CEST [{{{10.89.12.0 ffffff00}} 10.89.12.1 }] false false true map[com.docker.compose.project:debug-ports-4 io.podman.compose.project:debug-ports-4] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network debug-ports-5_default: &{debug-ports-5_default 48e9bf8065a72aacc8a194b178215706e6f463a13bb7fa380fc09a009254bbec bridge podman17 2022-09-02 23:00:28.244576873 +0200 CEST [{{{10.89.16.0 ffffff00}} 10.89.16.1 }] false false true map[com.docker.compose.project:debug-ports-5 io.podman.compose.project:debug-ports-5] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network logs_and_more-2_default: &{logs_and_more-2_default 337e9e910b044ae53c8530ce1d12a5268602b881bb3c7fc7c8f526cf78154f30 bridge podman5 2022-08-30 22:59:14.104826668 +0200 CEST [{{{10.89.4.0 ffffff00}} 10.89.4.1 }] false false true map[com.docker.compose.project:logs_and_more-2 io.podman.compose.project:logs_and_more-2] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network logs_and_more-3_default: &{logs_and_more-3_default 0d03ab8a8d0886c6418da4f74c6a624a27b7e2933a8b60a873bc240b2f40e0a7 bridge podman9 2022-08-31 22:59:14.222022379 +0200 CEST [{{{10.89.8.0 ffffff00}} 10.89.8.1 }] false false true map[com.docker.compose.project:logs_and_more-3 io.podman.compose.project:logs_and_more-3] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network logs_and_more-4_default: &{logs_and_more-4_default ae8f27685a08348d22c188510ff1148d5364249ec4d4eddc0b653e4188444fd1 bridge podman14 2022-09-01 23:36:12.470261611 +0200 CEST [{{{10.89.13.0 ffffff00}} 10.89.13.1 }] false false true map[com.docker.compose.project:logs_and_more-4 io.podman.compose.project:logs_and_more-4] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network logs_and_more-5_default: &{logs_and_more-5_default 2c2bd1a9e02e1f5edca01d77bc73061a35a7932942aa048e69c8bc6ea45184fb bridge podman18 2022-09-02 23:36:13.285083917 +0200 CEST [{{{10.89.17.0 ffffff00}} 10.89.17.1 }] false false true map[com.docker.compose.project:logs_and_more-5 
io.podman.compose.project:logs_and_more-5] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-51_default: &{main-51_default adececb80f91031aea998d34f64ffcf389ff022e8d0f554acbd62fdd6997aebd bridge podman1 2022-08-30 08:42:01.01374962 +0200 CEST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] false false true map[com.docker.compose.project:main-51 io.podman.compose.project:main-51] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-52_default: &{main-52_default 9fa85de428730babe03e22e14d9a45fde36d7fa735c17a4e3aca28013390ccac bridge podman3 2022-08-30 21:19:13.979485069 +0200 CEST [{{{10.89.2.0 ffffff00}} 10.89.2.1 }] false false true map[com.docker.compose.project:main-52 io.podman.compose.project:main-52] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-53_default: &{main-53_default 5ee6887e518be593d012f6513096c5bb93fd59ded78de18d6d93d504b3596768 bridge podman7 2022-08-31 21:19:13.957820963 +0200 CEST [{{{10.89.6.0 ffffff00}} 10.89.6.1 }] false false true map[com.docker.compose.project:main-53 io.podman.compose.project:main-53] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-54_default: &{main-54_default d6e1a09bab8e2eaba7d9d0c27b3a096c5ba4e52560167751a72a83f28b7e2230 bridge podman12 2022-09-01 21:50:59.004606414 +0200 CEST [{{{10.89.11.0 ffffff00}} 10.89.11.1 }] false false true map[com.docker.compose.project:main-54 io.podman.compose.project:main-54] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-55_default: &{main-55_default f7b720420207c23e6000880ca073692be69f9f60991f8a5ca17b8fdbc1e0ec4c bridge podman16 2022-09-02 21:50:59.526202961 +0200 CEST [{{{10.89.15.0 ffffff00}} 10.89.15.1 }] false false true map[com.docker.compose.project:main-55 io.podman.compose.project:main-55] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-60_default: &{main-60_default 9b5dad74edee6e2fb3c171e0ff8f4bcd681fb64a188e8ddc5ccc5476520e63ab bridge podman20 2022-09-19 21:19:15.238825176 +0200 CEST [{{{10.89.19.0 ffffff00}} 10.89.19.1 }] false false true map[com.docker.compose.project:main-60 io.podman.compose.project:main-60] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network main-61_default: &{main-61_default 445bc7b513766677a6f8adde4b58375816967066a37883fdda3258de7d12111d bridge podman21 2022-09-20 21:19:21.637804986 +0200 CEST [{{{10.89.20.0 ffffff00}} 10.89.20.1 }] false false true map[com.docker.compose.project:main-61 io.podman.compose.project:main-61] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network dbuser-change-2_default: &{dbuser-change-2_default cbd57d937d80992478ac442fc9034b249d44cc06467c539c3a251d170465c5c3 bridge podman6 2022-08-31 00:33:16.267735464 +0200 CEST [{{{10.89.5.0 ffffff00}} 10.89.5.1 }] false false true map[com.docker.compose.project:dbuser-change-2 io.podman.compose.project:dbuser-change-2] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network dbuser-change-3_default: &{dbuser-change-3_default f7e6ba55f829b217c5d60c231d7ded87228c78f6a0dc6f69d80235c76a304696 bridge podman10 2022-09-01 00:33:16.59486874 +0200 CEST [{{{10.89.9.0 ffffff00}} 10.89.9.1 }] false false true map[com.docker.compose.project:dbuser-change-3 io.podman.compose.project:dbuser-change-3] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network dbuser-change-4_default: &{dbuser-change-4_default 
4d7ca37fda5792b9837a9de55e0df4849ea1d95d6eecaad17e7505ca8bab5c54 bridge podman15 2022-09-02 01:05:00.091509299 +0200 CEST [{{{10.89.14.0 ffffff00}} 10.89.14.1 }] false false true map[com.docker.compose.project:dbuser-change-4 io.podman.compose.project:dbuser-change-4] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network test: &{test 412333a8c2aecefa7178dc6caba56740f57e6d0addbd5ee3c9102c7fe7826db8 bridge podman19 2022-09-19 10:47:47.677145423 +0200 CEST [{{{10.89.18.0 ffffff00}} 10.89.18.1 }] false false true map[] map[] map[driver:host-local]}" level=debug msg="Successfully loaded network test_foo: &{test_foo 86d7556762e7157e3cd07dd23f4aa19a2999a7b46e7d3b4a8252c2c6c8cd3de0 bridge podman11 2022-09-21 16:12:09.899547738 +0200 CEST [{{{10.89.10.0 ffffff00}} 10.89.10.1 }] false false true map[] map[] map[driver:host-local]}" level=debug msg="Successfully loaded 22 networks" level=debug msg="Allocated lock 87 for container {containerId}" level=debug msg="parsed reference into \"[overlay@/home/jenkins/.local/share/containers/storage+/run/user/1108/xdgruntime/containers]@1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="exporting opaque data as blob \"sha256:1287971ac0d366ba44ce5013773fdfcbebb65200ac7ee7a167a19fc2f6f3e50b\"" level=debug msg="Cached value indicated that overlay is not supported" level=debug msg="Check for idmapped mounts support " level=debug msg="Created container \"{containerId}\"" level=debug msg="Container \"{containerId}\" has work directory \"/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata\"" level=debug msg="Container \"{containerId}\" has run directory \"/run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata\"" level=debug msg="Not attaching to stdin" level=debug msg="Made network namespace at /run/user/1108/netns/netns-24da83ef-c5b6-7b9a-5d27-4099a21e1208 for container {containerId}" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" level=debug msg="[graphdriver] trying provided driver \"overlay\"" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that overlay is supported" level=debug msg="Cached value indicated that metacopy is not being used" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" level=debug msg="Cached value indicated that volatile is being used" level=debug msg="overlay: 
mount_data=lowerdir=/home/jenkins/.local/share/containers/storage/overlay/l/KK7OCHWKB6ELXPVGO7ATFD6XEV:/home/jenkins/.local/share/containers/storage/overlay/l/47MRODP3P7LS5S6MZIRZKOZBI7:/home/jenkins/.local/share/containers/storage/overlay/l/E2XGRN6TUGWD3USS6K2ASNVG25:/home/jenkins/.local/share/containers/storage/overlay/l/G7OLM7Y73VVE7TXKPAZEJOYJBD:/home/jenkins/.local/share/containers/storage/overlay/l/WIJ2LTKW2TVXAUZY377NXZOA6K:/home/jenkins/.local/share/containers/storage/overlay/l/OCYKCOGXFWYBRGF63L7F77WQCS:/home/jenkins/.local/share/containers/storage/overlay/l/HITP76V5JUWFIEO4WD4NAUBO3B:/home/jenkins/.local/share/containers/storage/overlay/l/37MKERMYZTH44IRAXFXJN7KRSP:/home/jenkins/.local/share/containers/storage/overlay/l/YCC4WC6DMPWCFOBJE5NAXI2YZA:/home/jenkins/.local/share/containers/storage/overlay/l/BIUM4RYTKZUYTQ4GZWQHBHKO23:/home/jenkins/.local/share/containers/storage/overlay/l/YPYJR6HUTT6ZAL4QAXSPVCEY2B:/home/jenkins/.local/share/containers/storage/overlay/l/O5XRDVQQEH3KBRU32GVXEPYT5W:/home/jenkins/.local/share/containers/storage/overlay/l/LNDUZ5B7JJVMSVBZP22VGW6OI2:/home/jenkins/.local/share/containers/storage/overlay/l/MM5MPACLHWZM2G2YUBY7ZCRDN4:/home/jenkins/.local/share/containers/storage/overlay/l/MDLZVMMGJW7S7EUMV6OSJQ3D4S:/home/jenkins/.local/share/containers/storage/overlay/l/EOCCREDRGCLXMYJUFIANOVK4RD:/home/jenkins/.local/share/containers/storage/overlay/l/QHLYT44NSGTELQMQKYGSNNWTWW:/home/jenkins/.local/share/containers/storage/overlay/l/OS2N5REXJQ4JBMXNLQSPBMUZTA:/home/jenkins/.local/share/containers/storage/overlay/l/QCBHNIWEK4D3CC2EWRX3IUM7IK,upperdir=/home/jenkins/.local/share/containers/storage/overlay/aef7d141e2674d2086371e7d44e63b0811a7fbf362f33e6b8bc2986b18eee7ad/diff,workdir=/home/jenkins/.local/share/containers/storage/overlay/aef7d141e2674d2086371e7d44e63b0811a7fbf362f33e6b8bc2986b18eee7ad/work,,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c498,c906\"" level=debug msg="Mounted container \"{containerId}\" at \"/home/jenkins/.local/share/containers/storage/overlay/aef7d141e2674d2086371e7d44e63b0811a7fbf362f33e6b8bc2986b18eee7ad/merged\"" level=debug msg="Created root filesystem for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay/aef7d141e2674d2086371e7d44e63b0811a7fbf362f33e6b8bc2986b18eee7ad/merged" [DEBUG netavark::network::validation] "Validating network namespace..." [DEBUG netavark::commands::setup] "Setting up..." [INFO netavark::firewall] Using iptables firewall driver [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1 [DEBUG netavark::commands::setup] Setting up network podman with driver bridge [DEBUG netavark::network::core] Container veth name: "eth0" [DEBUG netavark::network::core] Brige name: "podman0" [DEBUG netavark::network::core] IP address for veth vector: [10.88.0.9/16] [DEBUG netavark::network::core] Gateway ip address vector: [10.88.0.1/16] [DEBUG netavark::network::core] Configured static up address for eth0 [DEBUG netavark::network::core] Container veth mac: "0e:07:45:d4:ae:09" [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-1D8721804F16F created on table nat [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT exists on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT created on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule ! 
-d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-1D8721804F16F [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F exists on table nat and chain POSTROUTING [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F created on table nat and chain POSTROUTING [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD [DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD [DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT created on table filter and chain NETAVARK_FORWARD [DEBUG netavark::commands::setup] { "podman": StatusBlock { dns_search_domains: Some( [], ), dns_server_ips: Some( [], ), interfaces: Some( { "eth0": NetInterface { mac_address: "0e:07:45:d4:ae:09", subnets: Some( [ NetAddress { gateway: Some( 10.88.0.1, ), ipnet: 10.88.0.9/16, }, ], ), }, }, ), }, } [DEBUG netavark::commands::setup] "Setup complete" level=debug msg="Adding nameserver(s) from network status of '[]'" level=debug msg="Adding search domain(s) from network status of '[]'" level=debug msg="Not modifying container {containerId} /etc/passwd" level=debug msg="Not modifying container {containerId} /etc/group" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" level=debug msg="Workdir \"/\" resolved to host path \"/home/jenkins/.local/share/containers/storage/overlay/aef7d141e2674d2086371e7d44e63b0811a7fbf362f33e6b8bc2986b18eee7ad/merged\"" level=debug msg="Created OCI spec for container {containerId} at /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/config.json" level=debug msg="/usr/bin/conmon messages will be logged to syslog" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c {containerId} -u {containerId} -r /usr/bin/runc -b /home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata -p /run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/pidfile -n unruffled_chaplygin --exit-dir /run/user/1108/xdgruntime/libpod/tmp/exits --full-attach -l k8s-file:/home/jenkins/.local/share/containers/storage/overlay-containers/{containerId}/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1108/xdgruntime/containers/overlay-containers/{containerId}/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jenkins/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1108/xdgruntime/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1108/xdgruntime/libpod/tmp 
--exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/jenkins/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg {containerId}]" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for pids: mkdir /sys/fs/cgroup/pids/conmon: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied level=debug msg="Received: 491742" level=info msg="Got Conmon PID as {PID}" level=debug msg="Created container {containerId} in OCI runtime" level=debug msg="Attaching to container {containerId}" level=debug msg="Starting container {containerId} with command [curl -m 2 https://github.com]" level=debug msg="Started container {containerId}" level=info msg="Received shutdown.Stop(), terminating!" PID=491646 level=debug msg="Enabling signal proxying" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0 curl: (28) Resolving timed out after 2000 milliseconds level=debug msg="Checking if container {containerId} should restart" level=debug msg="Removing container {containerId}" level=debug msg="Cleaning up container {containerId}" level=debug msg="Tearing down network namespace at /run/user/1108/netns/netns-24da83ef-c5b6-7b9a-5d27-4099a21e1208 for container {containerId}" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" [DEBUG netavark::commands::teardown] "Tearing down.." [INFO netavark::firewall] Using iptables firewall driver [DEBUG netavark::commands::teardown] Setting up network podman with driver bridge [DEBUG netavark::network::core_utils] bridge has 1 connected interfaces [DEBUG netavark::network::core] Container veth name being removed: "eth0" [DEBUG netavark::network::core] Container veth removed: "eth0" [DEBUG netavark::commands::teardown] "Teardown complete" level=debug msg="Successfully cleaned up container {containerId}" level=debug msg="Unmounted container \"{containerId}\"" level=debug msg="Removing all exec sessions for container {containerId}" level=debug msg="Container {containerId} storage is already unmounted, skipping..." level=debug msg="Called run.PersistentPostRunE(podman run --log-level=debug --rm --network podman curlimages/curl curl -m 2 https://github.com)"
```

Output of podman version:

Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.17.7
Built:        Mon Jul 11 16:56:53 2022
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.26.2
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.2-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.2, commit: 8c4f33ac0dcf558874b453d5027028b18d1502db'
  cpuUtilization:
    idlePercent: 96.44
    systemPercent: 1.56
    userPercent: 2
  cpus: 8
  distribution:
    distribution: '"rhel"'
    version: "8.6"
  eventLogger: file
  hostname: xxxxx
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1108
      size: 1
    - container_id: 1
      host_id: 493216
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1108
      size: 1
    - container_id: 1
      host_id: 493216
      size: 65536
  kernel: 4.18.0-372.19.1.el8_6.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 23616708608
  memTotal: 33385299968
  networkBackend: netavark
  ociRuntime:
    name: runc
    package: runc-1.1.3-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.3
      spec: 1.0.2-dev
      go: go1.17.7
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    path: /run/user/1108/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.6.0+15917+093ca6f8.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 4130603008
  swapTotal: 4294963200
  uptime: 660h 41m 18.53s (Approximately 27.50 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - xxxxx
  - xxxxx
  - xxxxx
  - xxxxx
  - xxxxx
store:
  configFile: /home/jenkins/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/jenkins/.local/share/containers/storage
  graphRootAllocated: 49405448192
  graphRootUsed: 14981369856
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 103
  runRoot: /run/user/1108/xdgruntime/containers
  volumePath: /home/jenkins/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1657551413
  BuiltTime: Mon Jul 11 16:56:53 2022
  GitCommit: ""
  GoVersion: go1.17.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.1.1-2.module+el8.6.0+15917+093ca6f8.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes, I checked the Podman Troubleshooting Guide. No, I didn't test 4.2.1, because there's no RPM in the RHEL8 repositories yet, but judging from the changelog it's unlikely that this issue was fixed.

Additional environment details (AWS, VirtualBox, physical, etc.):

Some VM, I guess it's irrelevant.

Luap99 commented 2 years ago

> Interestingly, it only happens when `--network` is specified, even if I specify `--network podman`, which I assumed to be the default network when none is given.

`--network slirp4netns` is the default for rootless containers, so that makes sense.

Once that happens, do all containers fail to connect? Please check the output from podman unshare --rootless-netns ip addr and check whether the slirp4netns process is running.
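
For reference, those two checks as commands (`pgrep -fa` is assumed to be available on the host):

```
# Inspect the shared rootless network namespace
podman unshare --rootless-netns ip addr

# Check whether a slirp4netns process is still alive;
# -fa matches against and prints the full command line
pgrep -fa slirp4netns || echo "slirp4netns is NOT running"
```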

micheljung commented 2 years ago

As it happened:

podman run --rm --name xxxxx_fix-certificate-issue-11_systemtest --network xxxxx_fix-certificate-issue-11_default docker.example.com/xxxx-jenkins/build-container-rhel8-java11 bash -c './gradlew test'
Exception in thread "main" java.net.UnknownHostException: nexus.example.com
$ podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
24: podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:50:71:09:83:5a brd ff:ff:ff:ff:ff:ff
    inet 10.89.2.1/24 brd 10.89.2.255 scope global podman3
       valid_lft forever preferred_lft forever
    inet6 fe80::90e6:efff:fe98:a375/64 scope link
       valid_lft forever preferred_lft forever
25: vethb35b7269@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether 36:85:46:ed:e7:28 brd ff:ff:ff:ff:ff:ff link-netnsid 11
    inet6 fe80::3485:46ff:feed:e728/64 scope link
       valid_lft forever preferred_lft forever
26: veth3b522327@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether ce:db:33:d0:b5:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::ccdb:33ff:fed0:b5a7/64 scope link
       valid_lft forever preferred_lft forever
27: veth8d9d5929@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether 8e:24:83:3c:95:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::8c24:83ff:fe3c:95e2/64 scope link
       valid_lft forever preferred_lft forever
28: vethe411db57@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether a2:b9:52:3c:a3:ba brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::a0b9:52ff:fe3c:a3ba/64 scope link
       valid_lft forever preferred_lft forever
30: vethf1172a83@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether e6:fb:d0:55:a5:71 brd ff:ff:ff:ff:ff:ff link-netnsid 12
    inet6 fe80::e4fb:d0ff:fe55:a571/64 scope link
       valid_lft forever preferred_lft forever
31: veth36b53aaa@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether 2a:50:71:09:83:5a brd ff:ff:ff:ff:ff:ff link-netnsid 13
    inet6 fe80::2850:71ff:fe09:835a/64 scope link
       valid_lft forever preferred_lft forever
32: veth7314528a@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether ee:da:2f:e4:02:c0 brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::ecda:2fff:fee4:2c0/64 scope link
       valid_lft forever preferred_lft forever
33: vethba0ad71@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether 4e:1d:90:86:3c:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 16
    inet6 fe80::4c1d:90ff:fe86:3cd2/64 scope link
       valid_lft forever preferred_lft forever
34: podman2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:84:89:57:ee:16 brd ff:ff:ff:ff:ff:ff
    inet 10.89.1.1/24 brd 10.89.1.255 scope global podman2
       valid_lft forever preferred_lft forever
    inet6 fe80::3cb7:e2ff:fe04:7a3f/64 scope link
       valid_lft forever preferred_lft forever
35: vetha47c1814@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether ee:27:32:27:0c:06 brd ff:ff:ff:ff:ff:ff link-netnsid 17
    inet6 fe80::ec27:32ff:fe27:c06/64 scope link
       valid_lft forever preferred_lft forever
36: vethf6c40f9a@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether 5e:c2:63:80:28:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::5cc2:63ff:fe80:28d8/64 scope link
       valid_lft forever preferred_lft forever
37: vethbd7d96ee@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether fa:0f:87:92:64:72 brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::f80f:87ff:fe92:6472/64 scope link
       valid_lft forever preferred_lft forever
38: vethc62a9409@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether 0e:84:89:57:ee:16 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::c84:89ff:fe57:ee16/64 scope link
       valid_lft forever preferred_lft forever
40: veth4035563e@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether c2:7b:52:d3:87:6b brd ff:ff:ff:ff:ff:ff link-netnsid 15
    inet6 fe80::c07b:52ff:fed3:876b/64 scope link
       valid_lft forever preferred_lft forever
41: veth7caeeefd@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether 3e:0f:04:ab:e8:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 18
    inet6 fe80::3c0f:4ff:feab:e8d0/64 scope link
       valid_lft forever preferred_lft forever
42: veth915b4498@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether ba:e7:a2:b7:1b:3c brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::b8e7:a2ff:feb7:1b3c/64 scope link
       valid_lft forever preferred_lft forever
43: vethc8c00999@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether 2a:d8:4d:4f:0f:82 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::28d8:4dff:fe4f:f82/64 scope link
       valid_lft forever preferred_lft forever

Sometimes we also face the issue that one container gets "no route to host" when connecting to another container. Maybe that's a different issue, maybe it isn't.

micheljung commented 2 years ago

Sometimes it works, sometimes it can't resolve external hosts, and sometimes it cannot resolve or reach internal hosts.

I'm just having another case of "Could not resolve host" between containers:

$ podman network rm -f test_xxxxx
time="2022-09-26T10:24:02+02:00" level=error msg="Failed to kill slirp4netns process: no such process"
test_xxxxx

$ podman network create test_xxxxx
test_xxxxx

$ podman run -d --rm --network test_xxxxx --name nginx_xxxxx library/nginx:1.21.6-alpine
0238222e882200d59af3021ccef78d0897c471216cd6c1f37fedb60565ddf779

$ podman network ls
NETWORK ID    NAME                                    DRIVER
aaa234db3d66  xxxxx_fix-certificate-issue-13_default  bridge
46d620e9d8bb  yyyyy_local-ports-6_default             bridge
2f259bab93aa  podman                                  bridge
4d7fe80d83f9  test_xxxxx                              bridge

$ podman run --rm --network test_xxxxx nginx:1.21.6-alpine curl http://nginx_xxxxx
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (6) Could not resolve host: nginx_xxxxx

$ podman container ls
CONTAINER ID  IMAGE                                           COMMAND               CREATED       STATUS            PORTS       NAMES
0238222e8822  docker.example.com/nginx:1.21.6-alpine          nginx -g daemon o...  1 second ago  Up 2 seconds ago              nginx_xxxxx

$ podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: tap0: <BROADCAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether de:ac:3f:06:7c:77 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.100/24 brd 10.0.2.255 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fd00::dcac:3fff:fe06:7c77/64 scope global tentative dynamic mngtmpaddr 
       valid_lft 86400sec preferred_lft 14400sec
    inet6 fe80::dcac:3fff:fe06:7c77/64 scope link 
       valid_lft forever preferred_lft forever
3: podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:f9:83:8b:e4:95 brd ff:ff:ff:ff:ff:ff
    inet 10.89.2.1/24 brd 10.89.2.255 scope global podman3
       valid_lft forever preferred_lft forever
    inet6 fe80::80cc:72ff:fe59:898b/64 scope link tentative 
       valid_lft forever preferred_lft forever
4: veth8be11969@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman3 state UP group default qlen 1000
    link/ether 3e:f9:83:8b:e4:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::3cf9:83ff:fe8b:e495/64 scope link 
       valid_lft forever preferred_lft forever

This was reproducible until I executed podman network prune -f and ran it again; then it worked. However, this one still failed:

$ podman run --rm --network test_xxxxx xxxx-jenkins/build-container-rhel8-java11 curl https://nexus.example.com/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:43 --:--:--     0curl: (6) Could not resolve host: nexus.example.com

After running podman stop -a && podman container prune -f && podman network prune -f, it worked again.

Luap99 commented 2 years ago

In your first comment the tap0 interface is missing, which means that the slirp4netns process was killed or crashed. It looks like you use Jenkins; I remember problems with Jenkins just killing our processes. For the second one I assume the aardvark-dns process was killed, likely the same problem.

I suggest you monitor those processes; it is very likely that something is killing them.
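
A minimal watchdog sketch along these lines (the process names are taken from this thread; the 5-second interval is arbitrary):

```
# Log a timestamp whenever slirp4netns or aardvark-dns is found not
# running for the current user.
while sleep 5; do
  for p in slirp4netns aardvark-dns; do
    pgrep -u "$USER" -x "$p" >/dev/null || echo "$(date -Is) $p not running"
  done
done
```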

micheljung commented 2 years ago

Related: https://github.com/containers/podman/issues/15390

micheljung commented 2 years ago

Thank you very much @Luap99 for your valuable input!

What I don't understand yet is in which scenario Podman decides to start slirp4netns. When creating the first container? And does it then assume that the process is already running if one container is running?

One might expect that the process is started whenever it's not running.

Luap99 commented 2 years ago

Podman creates the rootless netns when it does not exist (including starting slirp4netns). As long as the rootless netns is still there (a bind mount under XDG_RUNTIME_DIR/netns/rootless-netns...) we expect that everything is working. You have to stop all containers with a bridge network; only then does Podman clean up the netns (and kill slirp4netns).
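
A sketch for inspecting that state (the path is taken from the comment above; the exact file name may differ between Podman versions):

```
ls "$XDG_RUNTIME_DIR"/netns/              # look for the rootless-netns... entry
findmnt | grep "$XDG_RUNTIME_DIR/netns"   # is the bind mount still present?
```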

micheljung commented 2 years ago

Thanks again, I will look into it.

I tried starting Podman with XDG_RUNTIME_DIR=$(realpath ./podman/run), but this gets me `failed to mount runtime directory for rootless netns: no such file or directory` or `Failed to mount runtime directory for rootless netns`.

Maybe I need to ask this: is there a recommended way to use Podman in CI (Jenkins) that completely isolates the containers, networks, slirp4netns processes, etc. from each other? (And if yes, how? ;-))

Luap99 commented 2 years ago

If you run with --network slirp4netns you will have one slirp4netns process per container; in this case the containers cannot communicate with each other except via forwarded host ports.
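
A sketch of that mode (the image names and the port are illustrative, and `<host-ip>` is a placeholder for the host's address):

```
podman run -d --name web --network slirp4netns -p 8080:80 nginx:1.21.6-alpine
# A second slirp4netns container cannot reach "web" by name; it would have
# to go through the forwarded host port instead:
podman run --rm --network slirp4netns curlimages/curl curl http://<host-ip>:8080/
```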

micheljung commented 2 years ago

Unfortunately, that's not a solution, because the whole point is to have a separate network with N services (podman compose) and no open host ports, as those are problematic on a CI system.

github-actions[bot] commented 2 years ago

A friendly reminder that this issue had no activity for 30 days.

almereyda commented 1 year ago

In https://github.com/containers/podman/issues/7816#issuecomment-1250630813 we were pointed at https://github.com/jcarrano/wg-podman

Does it seem feasible to adapt this example with a userspace WireGuard implementation, if WireGuard kernel support is not available in CI, and to try again?

github-actions[bot] commented 1 year ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 1 year ago

@Luap99 I think this is waiting for a response from you.

Luap99 commented 1 year ago

There is nothing we can do in Podman to prevent this. TL;DR: do not kill the slirp4netns process.