Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Podman does not update port forwards after container restart to new container IP #23832

Closed FrankyBoy closed 2 weeks ago

FrankyBoy commented 2 weeks ago

Issue Description

When a container is shut down and later started again, it gets a new IP address. However, podman does not appear to update the forwarded port to the new address, so the published port keeps pointing at the old IP. So far the only thing I've found that resolves the problem is a full system restart (until I shut the container down again, of course).

The shortened compose.yaml file looks like this:

version: "3.8"

services:
  backend:
    image: [[[redacted]]]
    restart: always
    ports:
      - "${BACKEND_PORT}:5000"

  textservice:
    image: [[[redacted]]]
    restart: always
    ports:
      - "${TEXTSERVICE_PORT}:8080"

Indications / possible related problems

  1. podman compose down yields netavark errors (the same is reproducible with a single podman stop as well, but for the full picture):

    # podman compose down
    >>>> Executing external compose provider "/usr/bin/podman-compose". Please see podman-compose(1) for how to disable this message. <<<<
    
    ERRO[0000] Unable to clean up network for container 2e6ee3cee9e57e32d82b13a6b295b47017d17963437c7ed602b7cf06c22bf092: "netavark: open container netns: open /run/user/0/netns/netns-b63b3f27-98fe-f8a4-b83e-ce9aeedd0580: IO error: No such file or directory (os error 2)"
    composeenv_backend_1
    ERRO[0000] Unable to clean up network for container 2fa0393bbb0c6ca08c46ffb31cc4b68f445bac8b741a97e5c8eca4e251621c6d: "netavark: open container netns: open /run/user/0/netns/netns-8347ed59-6c73-784e-730d-4fcce55bbab0: IO error: No such file or directory (os error 2)"
    composeenv_textservice_1
    composeenv_textservice_1
    composeenv_backend_1
    27a29bdb6635fd76036e7b3a736eb0d1d6ffecf96d61883bc1bdb209a81ed507
  2. podman network inspect shows a changed IP address for the containers
  3. wget http://localhost:5000 fails with "no route to host", while wget http://[container-ip]:5000 returns the expected response
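Symptoms 2 and 3 can be correlated directly on the host: the DNAT rule that netavark installed for the published port should point at the container's current address. A diagnostic sketch, assuming rootful podman with the iptables firewall driver (composeenv_backend_1 and composeenv_default are the names generated from the compose project):

```shell
# Current IP of the backend container on the compose network
podman inspect composeenv_backend_1 --format \
  '{{ (index .NetworkSettings.Networks "composeenv_default").IPAddress }}'

# NAT rules that mention the published port; after a stop/start cycle the
# --to-destination address no longer matches the IP printed above, so
# connections to localhost:5000 are forwarded to a dead address.
iptables -t nat -S | grep -w 5000
```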

Steps to reproduce the issue

  1. start container
  2. stop container
  3. start container again
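The same three steps without compose, as a minimal sketch with a single container (image name and port are placeholders; any image listening on the published port works):

```shell
# 1. start a container with a published port on a user-defined network
podman network create repro_net
podman run -d --name repro --network repro_net -p 5000:5000 some.registry/image

# 2. stop it
podman stop repro

# 3. start it again; it may come back with a different IP on repro_net
podman start repro

# the published port should still answer; in this report it does not
curl -sf http://localhost:5000/ || echo "port forward broken"
```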

Describe the results you received

Port forwarding broken / No route to host.

Describe the results you expected

Port forwarding continues to work.

podman info output

host:
  arch: amd64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_100:2.1.12-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: e21e7c85b7637e622f21c57675bf1154fc8b1866'
  cpuUtilization:
    idlePercent: 99.91
    systemPercent: 0.05
    userPercent: 0.04
  cpus: 16
  databaseBackend: sqlite
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 2030
  hostname: [[[redacted]]]
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.8.4-2-pve
  linkmode: dynamic
  logDriver: journald
  memFree: 68099674112
  memTotal: 68719476736
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: podman-aardvark-dns_100:1.12.1-1_amd64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.1
    package: podman-netavark_100:1.12.2-1_amd64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: runc
    package: cri-o-runc_100:1.1.13-1_amd64
    path: /usr/lib/cri-o-runc/sbin/runc
    version: |-
      runc version unknown
      spec: 1.0.2-dev
      go: go1.23.0
      libseccomp: 2.5.4
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 0h 32m 25.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 14
    paused: 0
    running: 2
    stopped: 12
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 527295578112
  graphRootUsed: 7807967232
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 14
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.23.0
  Os: linux
  OsArch: linux/amd64
  Version: 5.2.2

Podman in a container

Yes

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

Additional information

Luap99 commented 2 weeks ago

Well this seems to be the issue:

ERRO[0000] Unable to clean up network for container 2e6ee3cee9e57e32d82b13a6b295b47017d17963437c7ed602b7cf06c22bf092: "netavark: open container netns: open /run/user/0/netns/netns-b63b3f27-98fe-f8a4-b83e-ce9aeedd0580: IO error: No such file or directory (os error 2)"

If we are unable to clean up, then we cannot remove the old firewall rules. How is your system configured? /run/user/0/ looks wrong, as root uses /run/netns by default.
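One way to check this, as a hypothesis only: rootful podman keeps network namespaces under /run/netns, while a path like /run/user/0/netns is what you would expect if XDG_RUNTIME_DIR were set to /run/user/0 for root (e.g. via a login session or sudo configuration). A quick look:

```shell
# Default netns location for rootful podman
ls -l /run/netns 2>/dev/null

# The location from the error message
ls -l /run/user/0/netns 2>/dev/null

# Normally unset for root; a value of /run/user/0 would explain why
# podman resolved the netns path there
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-<unset>}"
```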

Are you using podman-compose? Note that we do not support podman-compose directly. Can you reproduce this with simple podman commands, just creating a single container and stopping it? And please add --log-level debug to the podman commands and provide the full output.

FrankyBoy commented 2 weeks ago

Hi, ah that makes sense. I didn't find anything useful when searching for the error, though, only defects that were already addressed by PRs some time/versions ago.

Sorry, not sure what I can answer regarding configuration ... it's Debian 12 running inside Proxmox, if that helps any. Podman itself is run as root. The container is set to unprivileged and has nesting enabled. The podman package itself comes from the alvistack repository (https://downloadcontent.opensuse.org/repositories/home:/alvistack/Debian_12). I don't recall making any custom configuration after that (the environment was set up some time ago already and we are only now starting to use it more). If I can look up anything more specific, please let me know.

I can reproduce the issue with podman stop [container] as well ... here's the debug output:

# podman stop composeenv_textservice_1 --log-level debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Called stop.PersistentPreRunE(podman stop composeenv_textservice_1 --log-level debug)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Starting parallel job on container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09
DEBU[0000] Stopping ctr 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09 (timeout 10)
DEBU[0000] Stopping container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09 (PID 581)
DEBU[0000] Sending signal 15 to container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09
DEBU[0000] Container "243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09" state changed from "stopping" to "stopped" while waiting for it to be stopped: discontinuing stop procedure as another process interfered
DEBU[0000] Cleaning up container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09
DEBU[0000] Tearing down network namespace at /run/user/0/netns/netns-3648d680-bba4-9cac-f4b5-b63e0839098a for container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09
DEBU[0000] Successfully loaded network composeenv_default: &{composeenv_default 7b5cdd895d04f742e7fa90da8ead9098eb4a141f1ef73f15b47aef30cba7c029 bridge podman5 2024-09-02 10:36:37.280487192 +0000 UTC [{{{10.89.4.0 ffffff00}} 10.89.4.1 <nil>}] [] false false true [] map[com.docker.compose.project:composeenv io.podman.compose.project:composeenv] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network gip-server-main-090-01_default: &{gip-server-main-090-01_default 8e3d2b1edb4b4a4e1eae8d001673b00270fd18a8830c098e453a5614e14f4fb9 bridge podman2 2024-08-21 10:50:09.441145834 +0000 UTC [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] [] false false true [] map[com.docker.compose.project:gip-server-main-090-01 io.podman.compose.project:gip-server-main-090-01] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network gip-server-main-090-02_default: &{gip-server-main-090-02_default df49c9dda7a4805c842daabab29d97b55d457406bd1c8e7ca5da3fd363ecd751 bridge podman3 2024-08-22 10:04:02.518842701 +0000 UTC [{{{10.89.2.0 ffffff00}} 10.89.2.1 <nil>}] [] false false true [] map[com.docker.compose.project:gip-server-main-090-02 io.podman.compose.project:gip-server-main-090-02] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network gip-server-main-090-03_default: &{gip-server-main-090-03_default 1e9e711c5699e839b420a66b422b97b36e95361f0b0fbccf031d8d1d13cdf109 bridge podman4 2024-08-28 13:07:45.975049378 +0000 UTC [{{{10.89.3.0 ffffff00}} 10.89.3.1 <nil>}] [] false false true [] map[com.docker.compose.project:gip-server-main-090-03 io.podman.compose.project:gip-server-main-090-03] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 5 networks
[DEBUG netavark::commands::teardown] "Tearing down.."
[INFO  netavark::firewall] Using iptables firewall driver
ERRO[0000] Unable to clean up network for container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09: "netavark (exit code 1): open container netns: open /run/user/0/netns/netns-3648d680-bba4-9cac-f4b5-b63e0839098a: IO error: No such file or directory (os error 2)"
DEBU[0000] Successfully cleaned up container 243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09
DEBU[0000] Unmounted container "243f7881dcf62fb54d84a5db0e87e5bae250c6fb662334525ef01cbc0d1d4e09"
composeenv_textservice_1
DEBU[0000] Called stop.PersistentPostRunE(podman stop composeenv_textservice_1 --log-level debug)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=1079

Luap99 commented 2 weeks ago

Please show the debug log of a starting container.

FrankyBoy commented 2 weeks ago

Here you go :) I removed the network and pod first, and then got the commands from compose via verbose and dry-run... Just skipped over the volume checks because who cares ...

# podman pod create --log-level debug --name=pod_composeenv --infra=false --share=
INFO[0000] podman filtering at log level debug
DEBU[0000] Called create.PersistentPreRunE(podman pod create --log-level debug --name=pod_composeenv --infra=false --share=)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Using OCI runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Not creating an infra container
DEBU[0000] No networking because the infra container is missing
DEBU[0000] Created cgroup path machine.slice/machine-libpod_pod_5f65e30ce958c048de945303fe5b9a4658c724596a3342277c4ea69f709807d1.slice for parent machine.slice and name libpod_pod_5f65e30ce958c048de945303fe5b9a4658c724596a3342277c4ea69f709807d1
DEBU[0000] Created cgroup machine.slice/machine-libpod_pod_5f65e30ce958c048de945303fe5b9a4658c724596a3342277c4ea69f709807d1.slice
DEBU[0000] Got pod cgroup as machine.slice/machine-libpod_pod_5f65e30ce958c048de945303fe5b9a4658c724596a3342277c4ea69f709807d1.slice
5f65e30ce958c048de945303fe5b9a4658c724596a3342277c4ea69f709807d1
DEBU[0000] Called create.PersistentPostRunE(podman pod create --log-level debug --name=pod_composeenv --infra=false --share=)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=2212

# podman network create --log-level debug  --label io.podman.compose.project=composeenv --label com.docker.compose.project=composeenv composeenv_default
INFO[0000] podman filtering at log level debug
DEBU[0000] Called create.PersistentPreRunE(podman network create --log-level debug --label io.podman.compose.project=composeenv --label com.docker.compose.project=composeenv composeenv_default)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Successfully loaded network otherproj-server-main-090-01_default: &{otherproj-server-main-090-01_default 8e3d2b1edb4b4a4e1eae8d001673b00270fd18a8830c098e453a5614e14f4fb9 bridge podman2 2024-08-21 10:50:09.441145834 +0000 UTC [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-01 io.podman.compose.project:otherproj-server-main-090-01] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-02_default: &{otherproj-server-main-090-02_default df49c9dda7a4805c842daabab29d97b55d457406bd1c8e7ca5da3fd363ecd751 bridge podman3 2024-08-22 10:04:02.518842701 +0000 UTC [{{{10.89.2.0 ffffff00}} 10.89.2.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-02 io.podman.compose.project:otherproj-server-main-090-02] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-03_default: &{otherproj-server-main-090-03_default 1e9e711c5699e839b420a66b422b97b36e95361f0b0fbccf031d8d1d13cdf109 bridge podman4 2024-08-28 13:07:45.975049378 +0000 UTC [{{{10.89.3.0 ffffff00}} 10.89.3.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-03 io.podman.compose.project:otherproj-server-main-090-03] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 4 networks
DEBU[0000] found free device name podman1
DEBU[0000] found free ipv4 network subnet 10.89.0.0/24
composeenv_default
DEBU[0000] Called create.PersistentPostRunE(podman network create --log-level debug --label io.podman.compose.project=composeenv --label com.docker.compose.project=composeenv composeenv_default)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=2260

# podman create --log-level debug --name=composeenv_backend_1 --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -e ASPNETCORE_URLS=http://*:5000 -e DOTNET_ENVIRONMENT=Docker.Test -e ASPNETCORE_ENVIRONMENT=Docker.Test --network=composeenv_default --network-alias=backend --secret ConnectionStrings__GeoServicesDb -p 5000:5000 --restart always some.gitlab.com/my/backend
INFO[0000] podman filtering at log level debug
DEBU[0000] Called create.PersistentPreRunE(podman create --log-level debug --name=composeenv_backend_1 --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -e ASPNETCORE_URLS=http://*:5000 -e DOTNET_ENVIRONMENT=Docker.Test -e ASPNETCORE_ENVIRONMENT=Docker.Test --network=composeenv_default --network-alias=backend --secret ConnectionStrings__GeoServicesDb -p 5000:5000 --restart always some.gitlab.com/my/backend)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Adding port mapping from 5000 to 5000 length 1 protocol ""
DEBU[0000] Pulling image some.gitlab.com/my/backend (policy: missing)
DEBU[0000] Looking up image "some.gitlab.com/my/backend" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "some.gitlab.com/my/backend:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Found image "some.gitlab.com/my/backend" as "some.gitlab.com/my/backend:latest" in local containers storage
DEBU[0000] Found image "some.gitlab.com/my/backend" as "some.gitlab.com/my/backend:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4)
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Looking up image "some.gitlab.com/my/backend:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "some.gitlab.com/my/backend:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Found image "some.gitlab.com/my/backend:latest" as "some.gitlab.com/my/backend:latest" in local containers storage
DEBU[0000] Found image "some.gitlab.com/my/backend:latest" as "some.gitlab.com/my/backend:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4)
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Looking up image "some.gitlab.com/my/backend" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "some.gitlab.com/my/backend:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Found image "some.gitlab.com/my/backend" as "some.gitlab.com/my/backend:latest" in local containers storage
DEBU[0000] Found image "some.gitlab.com/my/backend" as "some.gitlab.com/my/backend:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4)
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] using systemd mode: false
DEBU[0000] adding container to pod pod_composeenv
DEBU[0000] setting container name composeenv_backend_1
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Successfully loaded network composeenv_default: &{composeenv_default 0c35be9774cfbec7be1e4c0d659260e035e087500c1f70c9d49d4d664926c384 bridge podman1 2024-09-02 12:45:17.660370092 +0000 UTC [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[com.docker.compose.project:composeenv io.podman.compose.project:composeenv] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-01_default: &{otherproj-server-main-090-01_default 8e3d2b1edb4b4a4e1eae8d001673b00270fd18a8830c098e453a5614e14f4fb9 bridge podman2 2024-08-21 10:50:09.441145834 +0000 UTC [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-01 io.podman.compose.project:otherproj-server-main-090-01] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-02_default: &{otherproj-server-main-090-02_default df49c9dda7a4805c842daabab29d97b55d457406bd1c8e7ca5da3fd363ecd751 bridge podman3 2024-08-22 10:04:02.518842701 +0000 UTC [{{{10.89.2.0 ffffff00}} 10.89.2.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-02 io.podman.compose.project:otherproj-server-main-090-02] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-03_default: &{otherproj-server-main-090-03_default 1e9e711c5699e839b420a66b422b97b36e95361f0b0fbccf031d8d1d13cdf109 bridge podman4 2024-08-28 13:07:45.975049378 +0000 UTC [{{{10.89.3.0 ffffff00}} 10.89.3.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-03 io.podman.compose.project:otherproj-server-main-090-03] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 5 networks
DEBU[0000] Allocated lock 1 for container 63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1"
DEBU[0000] Container "63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1" has work directory "/var/lib/containers/storage/overlay-containers/63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1/userdata"
DEBU[0000] Container "63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1" has run directory "/run/containers/storage/overlay-containers/63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1/userdata"
63be34a6b59fe1665ebdd252e129c52bf58ad3e6ac0b6cfb477ab4fd1e0991f1
DEBU[0000] Called create.PersistentPostRunE(podman create --log-level debug --name=composeenv_backend_1 --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -e ASPNETCORE_URLS=http://*:5000 -e DOTNET_ENVIRONMENT=Docker.Test -e ASPNETCORE_ENVIRONMENT=Docker.Test --network=composeenv_default --network-alias=backend --secret ConnectionStrings__GeoServicesDb -p 5000:5000 --restart always some.gitlab.com/my/backend)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=2275

# podman create --log-level debug --name=composeenv_textservice_1 --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=textservice -v composeenv_logs:/log -v composeenv_data:/data --network=composeenv_default --network-alias=textservice -p 5001:8080 --restart always some.gitlab.com/textservice
INFO[0000] podman filtering at log level debug
DEBU[0000] Called create.PersistentPreRunE(podman create --log-level debug --name=composeenv_textservice_1 --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=textservice -v composeenv_logs:/log -v composeenv_data:/data --network=composeenv_default --network-alias=textservice -p 5001:8080 --restart always some.gitlab.com/textservice)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Adding port mapping from 5001 to 8080 length 1 protocol ""
DEBU[0000] Pulling image some.gitlab.com/textservice (policy: missing)
DEBU[0000] Looking up image "some.gitlab.com/textservice" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "some.gitlab.com/textservice:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Found image "some.gitlab.com/textservice" as "some.gitlab.com/textservice:latest" in local containers storage
DEBU[0000] Found image "some.gitlab.com/textservice" as "some.gitlab.com/textservice:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692)
DEBU[0000] exporting opaque data as blob "sha256:d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Looking up image "some.gitlab.com/textservice:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "some.gitlab.com/textservice:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Found image "some.gitlab.com/textservice:latest" as "some.gitlab.com/textservice:latest" in local containers storage
DEBU[0000] Found image "some.gitlab.com/textservice:latest" as "some.gitlab.com/textservice:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692)
DEBU[0000] exporting opaque data as blob "sha256:d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] User mount composeenv_logs:/log options []
DEBU[0000] User mount composeenv_data:/data options []
DEBU[0000] Looking up image "some.gitlab.com/textservice" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "some.gitlab.com/textservice:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Found image "some.gitlab.com/textservice" as "some.gitlab.com/textservice:latest" in local containers storage
DEBU[0000] Found image "some.gitlab.com/textservice" as "some.gitlab.com/textservice:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692)
DEBU[0000] exporting opaque data as blob "sha256:d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Inspecting image d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692
DEBU[0000] exporting opaque data as blob "sha256:d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] exporting opaque data as blob "sha256:d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Inspecting image d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692
DEBU[0000] Inspecting image d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692
DEBU[0000] using systemd mode: false
DEBU[0000] adding container to pod pod_composeenv
DEBU[0000] setting container name composeenv_textservice_1
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Successfully loaded network composeenv_default: &{composeenv_default 0c35be9774cfbec7be1e4c0d659260e035e087500c1f70c9d49d4d664926c384 bridge podman1 2024-09-02 12:45:17.660370092 +0000 UTC [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[com.docker.compose.project:composeenv io.podman.compose.project:composeenv] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-01_default: &{otherproj-server-main-090-01_default 8e3d2b1edb4b4a4e1eae8d001673b00270fd18a8830c098e453a5614e14f4fb9 bridge podman2 2024-08-21 10:50:09.441145834 +0000 UTC [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-01 io.podman.compose.project:otherproj-server-main-090-01] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-02_default: &{otherproj-server-main-090-02_default df49c9dda7a4805c842daabab29d97b55d457406bd1c8e7ca5da3fd363ecd751 bridge podman3 2024-08-22 10:04:02.518842701 +0000 UTC [{{{10.89.2.0 ffffff00}} 10.89.2.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-02 io.podman.compose.project:otherproj-server-main-090-02] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-03_default: &{otherproj-server-main-090-03_default 1e9e711c5699e839b420a66b422b97b36e95361f0b0fbccf031d8d1d13cdf109 bridge podman4 2024-08-28 13:07:45.975049378 +0000 UTC [{{{10.89.3.0 ffffff00}} 10.89.3.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-03 io.podman.compose.project:otherproj-server-main-090-03] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 5 networks
DEBU[0000] Allocated lock 4 for container 0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c
DEBU[0000] exporting opaque data as blob "sha256:d0e8ae5909b59f9dee017f5044e04964ac2c01f78cdb0758109377dbed608692"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c"
DEBU[0000] Container "0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c" has work directory "/var/lib/containers/storage/overlay-containers/0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c/userdata"
DEBU[0000] Container "0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c" has run directory "/run/containers/storage/overlay-containers/0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c/userdata"
0cd4a8dd0c2a3f3c8558a38a501428f7be01786bd61aedb7ebe40c1641abbe2c
DEBU[0000] Called create.PersistentPostRunE(podman create --log-level debug --name=composeenv_textservice_1 --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=textservice -v composeenv_logs:/log -v composeenv_data:/data --network=composeenv_default --network-alias=textservice -p 5001:8080 --restart always some.gitlab.com/textservice)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=2341
Luap99 commented 2 weeks ago

These are just the create commands; nothing is actually started here. You need to run something like `podman --log-level debug pod start pod_composeenv` to start all containers in the pod. That should show what is going on during the network setup.
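For reference, the debug workflow being suggested would look roughly like this. This is a sketch, not verbatim from the thread; the pod and container names are taken from the compose labels above, and the inspect template and grep pattern are assumptions for illustration:

```shell
# Start every container in the pod with debug logging enabled,
# so netavark prints the port-forward / DNAT setup steps.
podman --log-level debug pod start pod_composeenv

# After a stop/start cycle, compare the container IP that netavark
# assigned with the DNAT target currently in the nat table:
podman inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' composeenv_backend_1
iptables -t nat -S | grep 5000
```

If the DNAT rule still points at the old IP while inspect shows a new one, that would confirm the stale port-forward described in the issue.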

FrankyBoy commented 2 weeks ago

Yes, I just noticed that too ... major facepalm moment. Thanks for all your patience 😅 I don't know why, but now the dry-run gave me a proper run command 🤦‍♂️ Anyhow, here we go.

# podman run --log-level debug --name=composeenv_backend_1 -d --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -e ASPNETCORE_URLS=http://*:5000 -e DOTNET_ENVIRONMENT=Docker.Test -e ASPNETCORE_ENVIRONMENT=Docker.Test --network=composeenv_default --network-alias=backend --secret ConnectionStrings__GeoServicesDb -p 5000:5000 --restart always registry.somegitlab.com/backend_proj
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level debug --name=composeenv_backend_1 -d --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -e ASPNETCORE_URLS=http://*:5000 -e DOTNET_ENVIRONMENT=Docker.Test -e ASPNETCORE_ENVIRONMENT=Docker.Test --network=composeenv_default --network-alias=backend --secret ConnectionStrings__GeoServicesDb -p 5000:5000 --restart always registry.somegitlab.com/backend_proj)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Adding port mapping from 5000 to 5000 length 1 protocol ""
DEBU[0000] Pulling image registry.somegitlab.com/backend_proj (policy: missing)
DEBU[0000] Looking up image "registry.somegitlab.com/backend_proj" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "registry.somegitlab.com/backend_proj:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Found image "registry.somegitlab.com/backend_proj" as "registry.somegitlab.com/backend_proj:latest" in local containers storage
DEBU[0000] Found image "registry.somegitlab.com/backend_proj" as "registry.somegitlab.com/backend_proj:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4)
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Looking up image "registry.somegitlab.com/backend_proj:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "registry.somegitlab.com/backend_proj:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Found image "registry.somegitlab.com/backend_proj:latest" as "registry.somegitlab.com/backend_proj:latest" in local containers storage
DEBU[0000] Found image "registry.somegitlab.com/backend_proj:latest" as "registry.somegitlab.com/backend_proj:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4)
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Looking up image "registry.somegitlab.com/backend_proj" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "registry.somegitlab.com/backend_proj:latest" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Found image "registry.somegitlab.com/backend_proj" as "registry.somegitlab.com/backend_proj:latest" in local containers storage
DEBU[0000] Found image "registry.somegitlab.com/backend_proj" as "registry.somegitlab.com/backend_proj:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage]@e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4)
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] Inspecting image e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4
DEBU[0000] using systemd mode: false
DEBU[0000] adding container to pod pod_composeenv
DEBU[0000] setting container name composeenv_backend_1
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Successfully loaded network composeenv_default: &{composeenv_default b3f4e6c6651da3c9a22606d54345b464e1acaa3697f6c079d575eb74ff5612a7 bridge podman1 2024-09-02 13:15:42.130700749 +0000 UTC [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[com.docker.compose.project:composeenv io.podman.compose.project:composeenv] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-01_default: &{otherproj-server-main-090-01_default 8e3d2b1edb4b4a4e1eae8d001673b00270fd18a8830c098e453a5614e14f4fb9 bridge podman2 2024-08-21 10:50:09.441145834 +0000 UTC [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-01 io.podman.compose.project:otherproj-server-main-090-01] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-02_default: &{otherproj-server-main-090-02_default df49c9dda7a4805c842daabab29d97b55d457406bd1c8e7ca5da3fd363ecd751 bridge podman3 2024-08-22 10:04:02.518842701 +0000 UTC [{{{10.89.2.0 ffffff00}} 10.89.2.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-02 io.podman.compose.project:otherproj-server-main-090-02] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded network otherproj-server-main-090-03_default: &{otherproj-server-main-090-03_default 1e9e711c5699e839b420a66b422b97b36e95361f0b0fbccf031d8d1d13cdf109 bridge podman4 2024-08-28 13:07:45.975049378 +0000 UTC [{{{10.89.3.0 ffffff00}} 10.89.3.1 <nil>}] [] false false true [] map[com.docker.compose.project:otherproj-server-main-090-03 io.podman.compose.project:otherproj-server-main-090-03] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 5 networks
DEBU[0000] Allocated lock 1 for container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788
DEBU[0000] exporting opaque data as blob "sha256:e42f592b4bb0a3e0075cc51749e7c61ef66c83e8bac8f0b445be908390ab2ab4"
DEBU[0000] Check for idmapped mounts support create mapped mount: operation not permitted
DEBU[0000] Created container "39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788"
DEBU[0000] Container "39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788" has work directory "/var/lib/containers/storage/overlay-containers/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788/userdata"
DEBU[0000] Container "39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788" has run directory "/run/containers/storage/overlay-containers/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788/userdata"
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/RBG5QEV2LBDLIIY7U2KJCA3Y7G:/var/lib/containers/storage/overlay/l/7IPCIXAVWHPFZYLW4VUQVZN3G6:/var/lib/containers/storage/overlay/l/UMQYS5ISFMD7Q7XBU3HRCMUT6R:/var/lib/containers/storage/overlay/l/VDTMSUQUEP53DXN4VMVTOOVPNB:/var/lib/containers/storage/overlay/l/6KXYADYJ2SHPRTTJHGM6JSLXDV:/var/lib/containers/storage/overlay/l/PJY6I6R7BDP7Y5VNNZ4XS6OBFR:/var/lib/containers/storage/overlay/l/CA6R7S7Y7UXOXRGUD4VNMLN5FO,upperdir=/var/lib/containers/storage/overlay/fcfdee79b0ce8862cf272b7048361ea4d7fe1fd1b38ea18308f05a40fc893b82/diff,workdir=/var/lib/containers/storage/overlay/fcfdee79b0ce8862cf272b7048361ea4d7fe1fd1b38ea18308f05a40fc893b82/work,userxattr
DEBU[0000] Mounted container "39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788" at "/var/lib/containers/storage/overlay/fcfdee79b0ce8862cf272b7048361ea4d7fe1fd1b38ea18308f05a40fc893b82/merged"
DEBU[0000] Created root filesystem for container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 at /var/lib/containers/storage/overlay/fcfdee79b0ce8862cf272b7048361ea4d7fe1fd1b38ea18308f05a40fc893b82/merged
DEBU[0000] Made network namespace at /run/user/0/netns/netns-ef6c005f-bbee-d108-fa12-6b8c1eb7c0d9 for container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network composeenv_default
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/podman1/rp_filter to 2
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/rp_filter to 2
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-574D093ADAB99 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-574D093ADAB99
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-574D093ADAB99
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-574D093ADAB99 created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -p udp -s 10.89.0.0/24 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -p tcp -s 10.89.0.0/24 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -m conntrack --ctstate INVALID -j DROP created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-574D093ADAB99 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 5000 created on table nat and chain NETAVARK-DN-574D093ADAB99
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 5000 created on table nat and chain NETAVARK-DN-574D093ADAB99
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.2:5000 --destination-port 5000 created on table nat and chain NETAVARK-DN-574D093ADAB99
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-574D093ADAB99 -p tcp --dport 5000 -m comment --comment 'dnat name: composeenv_default id: 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788' created on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
[DEBUG netavark::dns::aardvark] Spawning aardvark server
[DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "/usr/libexec/podman/aardvark-dns", "--config", "/run/containers/storage/networks/aardvark-dns", "-p", "53", "run"]
[DEBUG netavark::commands::setup] {
        "composeenv_default": StatusBlock {
            dns_search_domains: Some(
                [
                    "dns.podman",
                ],
            ),
            dns_server_ips: Some(
                [
                    10.89.0.1,
                ],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "9e:01:fb:f2:8d:68",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.89.0.1,
                                    ),
                                    ipnet: 10.89.0.2/24,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
DEBU[0000] Not modifying container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 /etc/passwd
DEBU[0000] Not modifying container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 /etc/group
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting Cgroups for container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 to machine-libpod_pod_c4d679837eb096e5730dd0e8e0954661ff0ca9c65aa12c96c621eed104aeb2fb.slice:libpod:39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/app" resolved to host path "/var/lib/containers/storage/overlay/fcfdee79b0ce8862cf272b7048361ea4d7fe1fd1b38ea18308f05a40fc893b82/merged/app"
DEBU[0000] Created OCI spec for container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 at /var/lib/containers/storage/overlay-containers/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788/userdata/config.json
DEBU[0000] Created cgroup path machine.slice/machine-libpod_pod_c4d679837eb096e5730dd0e8e0954661ff0ca9c65aa12c96c621eed104aeb2fb.slice for parent machine.slice and name libpod_pod_c4d679837eb096e5730dd0e8e0954661ff0ca9c65aa12c96c621eed104aeb2fb
DEBU[0000] Created cgroup machine.slice/machine-libpod_pod_c4d679837eb096e5730dd0e8e0954661ff0ca9c65aa12c96c621eed104aeb2fb.slice
DEBU[0000] Got pod cgroup as machine.slice/machine-libpod_pod_c4d679837eb096e5730dd0e8e0954661ff0ca9c65aa12c96c621eed104aeb2fb.slice
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 -u 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 -r /usr/lib/cri-o-runc/sbin/runc -b /var/lib/containers/storage/overlay-containers/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788/userdata -p /run/containers/storage/overlay-containers/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788/userdata/pidfile -n composeenv_backend_1 --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788]"
INFO[0000] Running conmon under slice machine-libpod_pod_c4d679837eb096e5730dd0e8e0954661ff0ca9c65aa12c96c621eed104aeb2fb.slice and unitName libpod-conmon-39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 1126
INFO[0000] Got Conmon PID as 1113
DEBU[0000] Created container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 in OCI runtime
DEBU[0000] Adding nameserver(s) from network status of '["10.89.0.1"]'
DEBU[0000] Adding search domain(s) from network status of '["dns.podman"]'
DEBU[0000] Starting container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788 with command [dotnet WebApplication1.dll]
DEBU[0000] Started container 39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788
DEBU[0000] Notify sent successfully
39417587a014c987fd4e13e18fe177e330ce2eb5d9223033bf2f1d3198b1f788
DEBU[0000] Called run.PersistentPostRunE(podman run --log-level debug --name=composeenv_backend_1 -d --pod=pod_composeenv --label io.podman.compose.config-hash=b92f1a68a9fd51a1885046ca76639c6c1acad12c2774eb6d70fc2f7f8bce06cf --label io.podman.compose.project=composeenv --label io.podman.compose.version=1.2.0 --label PODMAN_SYSTEMD_UNIT=podman-compose@composeenv.service --label com.docker.compose.project=composeenv --label com.docker.compose.project.working_dir=/root/containers/composeenv --label com.docker.compose.project.config_files=compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -e ASPNETCORE_URLS=http://*:5000 -e DOTNET_ENVIRONMENT=Docker.Test -e ASPNETCORE_ENVIRONMENT=Docker.Test --network=composeenv_default --network-alias=backend --secret ConnectionStrings__GeoServicesDb -p 5000:5000 --restart always registry.somegitlab.com/backend_proj)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=1008
Luap99 commented 2 weeks ago

Ok, that looks fine in theory. And when you call stop on the container, does it fail to clean up with the "No such file or directory" message again? How do you log in there? I assume you have XDG_RUNTIME_DIR set to /run/user/0 in your env. Do you log out and log in again by any chance? I wonder if systemd deletes /run/user/0 on logout in your case. Also, can you just run `ls /run/user/0/netns`? It should show the created namespace files after the start, and they should stay there until stop is called (which should remove them), but it seems something is removing them earlier.
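The checks suggested above can be scripted. This is a minimal sketch, not a definitive diagnostic: the `/run/user/0` path assumes the rootful-with-XDG_RUNTIME_DIR setup from this thread, and `loginctl enable-linger` (mentioned in the comment) is only a candidate fix if logout turns out to remove the runtime directory.

```shell
#!/bin/sh
# Sketch of the checks discussed above. Assumes XDG_RUNTIME_DIR points at
# /run/user/<uid> (here /run/user/0, since the reporter runs as root).
RUNTIME_DIR="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
echo "runtime dir: $RUNTIME_DIR"

# The netns files created at container start should persist here until stop.
if [ -d "$RUNTIME_DIR/netns" ]; then
    ls -l "$RUNTIME_DIR/netns"
else
    echo "no netns directory under $RUNTIME_DIR"
fi

# If logout removes /run/user/<uid>, enabling lingering keeps it alive:
#   loginctl enable-linger "$(id -un)"
# Check the current lingering state (guarded: loginctl may be unavailable):
loginctl show-user "$(id -un)" --property=Linger 2>/dev/null || true
```

Run this once after `podman compose up` and again after logging out and back in; if the netns files disappear between the two runs, logind's runtime-dir cleanup is the likely culprit.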

FrankyBoy commented 2 weeks ago

.... what the ... Now I can do container stop with no error ... container rm ... no error ... even podman-compose up/down ... no errors. No problem binding on repeated down/up either. What in the world is going on?!

I'd love to say "I changed xyz" ... but honestly I didn't do anything. I ran the same commands podman-compose does based on its --verbose output, so what made it work from one moment to the next is beyond me. Truly annoying, but I learned a lot about how to debug podman in case this problem shows up again in the future, so that's something as well :)

Sorry for taking your time and thank you again for your great support and guidance.

E: typo