containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Quadlet won't start on boot, pasta says "Couldn't set IPv4 route(s) in guest: Invalid argument" #22190

Closed: quietsoviet closed this issue 7 months ago

quietsoviet commented 7 months ago

Issue Description

Quadlet units won't start on boot; I tried the Plex, Jellyfin, and Syncthing settings from the Arch podman wiki. Started manually, they work with no problems.

21:41:53 systemd: Failed to start Syncthing container.
21:41:52 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:52 systemd: Failed to start Plex container.
21:41:52 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:52 systemd: Failed to start Syncthing container.
21:41:52 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:52 systemd: Failed to start Plex container.
21:41:52 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:52 systemd: Failed to start Syncthing container.
21:41:52 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:52 systemd: Failed to start Plex container.
21:41:51 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:51 systemd: Failed to start Syncthing container.
21:41:51 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:51 systemd: Failed to start Plex container.
21:41:51 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:51 systemd: Failed to start Syncthing container.
21:41:51 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:51 systemd: Failed to start Plex container.
21:41:51 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:51 systemd: Failed to start Syncthing container.
21:41:50 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:50 systemd: Failed to start Syncthing container.
21:41:50 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:50 systemd: Failed to start Plex container.
21:41:50 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:50 systemd: Failed to start Syncthing container.
21:41:50 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:50 systemd: Failed to start Plex container.
21:41:49 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:49 systemd: Failed to start Syncthing container.
21:41:49 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:49 systemd: Failed to start Plex container.
21:41:49 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:49 systemd: Failed to start Syncthing container.
21:41:49 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:49 systemd: Failed to start Plex container.
21:41:49 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:49 systemd: Failed to start Syncthing container.
21:41:49 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:48 systemd: Failed to start Plex container.
21:41:48 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:48 systemd: Failed to start Syncthing container.
21:41:48 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:48 systemd: Failed to start Syncthing container.
21:41:48 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:48 systemd: Failed to start Syncthing container.
21:41:47 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:47 systemd: Failed to start Syncthing container.
21:41:47 passt.avx2: Couldn't set IPv4 route(s) in guest: Invalid argument
21:41:47 systemd: Failed to start npm.service.

The containers then exit with "Start request repeated too quickly":

× npm.service
     Loaded: loaded (/home/matija/.config/containers/systemd/npm.container; generated)
     Active: failed (Result: exit-code) since Wed 2024-03-27 21:41:47 CET; 52min ago
    Process: 1584 ExecStart=/usr/bin/podman run --name=npm --cidfile=/run/user/1000/npm.cid --replace --rm --cgroups=split --network=npm_proxy --sdnotify=conmon -d -v /home/matija/Documents/podman/npm/config:/config -v /home/matija/Documents/podman/npm/letsencrypt:/etc/letsencrypt:Z -v /home/matija/Documents/podman/npm/data:/data --label io.containers.autoupdate=registry --publish 80:80/tcp --publish 443:443/tcp --publish 81:81/tcp --env DB_SQLITE_FILE=/config/database.sqlite --env DISABLE_IPV6=true docker.io/jc21/nginx-proxy-manager:latest (code=exited, status=125)
    Process: 1627 ExecStopPost=/usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/npm.cid (code=exited, status=0/SUCCESS)
   Main PID: 1584 (code=exited, status=125)
        CPU: 97ms

ožu 27 21:41:47 archNAS systemd[557]: Failed to start npm.service.
ožu 27 21:41:47 archNAS systemd[557]: npm.service: Scheduled restart job, restart counter is at 6.
ožu 27 21:41:47 archNAS systemd[557]: npm.service: Start request repeated too quickly.
ožu 27 21:41:47 archNAS systemd[557]: npm.service: Failed with result 'exit-code'.
ožu 27 21:41:47 archNAS systemd[557]: Failed to start npm.service.

Then I added StartLimitBurst=0 to the quadlet.
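For reference, a minimal sketch of where such a setting could live in a quadlet file (the file name and image below are placeholders); StartLimitBurst= is a standard systemd [Unit] option that quadlet passes through to the generated service:

# ~/.config/containers/systemd/example.container (hypothetical)
[Unit]
Description=Example container
# A burst of 0 effectively disables systemd's start rate limiting, so the unit
# is not given up on with "Start request repeated too quickly"
StartLimitBurst=0

[Container]
Image=docker.io/library/alpine:latest

[Service]
Restart=always

[Install]
WantedBy=default.target

With that change in place, the Syncthing service looked like this: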

● syncthing-lsio.service - Syncthing container
     Loaded: loaded (/home/matija/.config/containers/systemd/syncthing-lsio.container; generated)
     Active: active (running) since Wed 2024-03-27 21:41:53 CET; 1h 2min ago
   Main PID: 3435 (conmon)
      Tasks: 33 (limit: 27397)
     Memory: 113.2M (peak: 114.7M)
        CPU: 4.024s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/syncthing-lsio.service
             ├─libpod-payload-13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25
             │ ├─3440 /package/admin/s6/command/s6-svscan -d4 -- /run/service
             │ ├─3500 s6-supervise s6-linux-init-shutdownd
             │ ├─3503 /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -d3 -c /run/s6/basedir -g 3000 -C -B
             │ ├─3546 s6-supervise svc-cron
             │ ├─3547 s6-supervise svc-syncthing
             │ ├─3548 s6-supervise s6rc-oneshot-runner
             │ ├─3549 s6-supervise s6rc-fdholder
             │ ├─3557 /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --
             │ ├─3659 busybox crond -f -S -l 5
             │ ├─3661 syncthing -home=/config -no-browser -no-restart --gui-address=0.0.0.0:8384
             │ └─3691 /usr/bin/syncthing -home=/config -no-browser -no-restart --gui-address=0.0.0.0:8384
             └─runtime
               ├─3435 /usr/bin/conmon --api-version 1 -c 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25 -u 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25 -r /usr/bin/crun -b /home/matija/.local/share/containers/storage/overlay-containers/13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25/userdata -p /run/user/1000/containers/overlay-containers/13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25/userdata/pidfile -n syncthing --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25 --full-attach -l none --log-level warning --syslog --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/overlay-containers/13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25/userdata/oci-log --conmon-pidfile /run/user/1000/containers/overlay-containers/13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/matija/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg "" --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/matija/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25
               └─3453 /usr/bin/pasta --config-net -t 22000-22000:22000-22000 -t 127.0.0.1/8384-8384:8384-8384 -u none -T none -U none --no-map-gw --dns none --quiet --netns /run/user/1000/netns/netns-b29c50aa-1dfb-10f4-f361-f05795788cd3

ožu 27 21:41:53 archNAS systemd[557]: syncthing-lsio.service: Scheduled restart job, restart counter is at 18.
ožu 27 21:41:53 archNAS systemd[557]: Starting Syncthing container...
ožu 27 21:41:53 archNAS podman[3182]: 2024-03-27 21:41:53.34531963 +0100 CET m=+0.058399044 container create 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25 (image=lscr.io/linuxserver/syncthing:latest, name=syncthing, org.opencontainers.image.url=https://github.com/linuxserver/docker-syncthing/packages, build_version=Linuxserver.io version:- v1.27.4-ls135 Build-date:- 2024-03-16T01:32:42+00:00, org.opencontainers.image.title=Syncthing, org.opencontainers.image.licenses=GPL-3.0-only, PODMAN_SYSTEMD_UNIT=syncthing-lsio.service, org.opencontainers.image.source=https://github.com/linuxserver/docker-syncthing, org.opencontainers.image.authors=linuxserver.io, org.opencontainers.image.version=v1.27.4-ls135, org.opencontainers.image.description=[Syncthing](https://syncthing.net) replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it's transmitted over the Internet., org.opencontainers.image.vendor=linuxserver.io, org.opencontainers.image.created=2024-03-16T01:32:42+00:00, maintainer=thelamer, org.opencontainers.image.ref.name=596dda3c979579096ebc15569c6eeec9bf2f28b7, io.containers.autoupdate=registry, org.opencontainers.image.documentation=https://docs.linuxserver.io/images/docker-syncthing, org.opencontainers.image.revision=596dda3c979579096ebc15569c6eeec9bf2f28b7)
ožu 27 21:41:53 archNAS podman[3182]: 2024-03-27 21:41:53.318021121 +0100 CET m=+0.031100530 image pull 8c1fb57211237ad802d93ab81737376a4f062a91c295ec2f6884045e93e0b172 lscr.io/linuxserver/syncthing:latest
ožu 27 21:41:53 archNAS podman[3182]: 2024-03-27 21:41:53.425616484 +0100 CET m=+0.138695892 container init 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25 (image=lscr.io/linuxserver/syncthing:latest, name=syncthing, build_version=Linuxserver.io version:- v1.27.4-ls135 Build-date:- 2024-03-16T01:32:42+00:00, maintainer=thelamer, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=syncthing-lsio.service, org.opencontainers.image.source=https://github.com/linuxserver/docker-syncthing, org.opencontainers.image.authors=linuxserver.io, org.opencontainers.image.created=2024-03-16T01:32:42+00:00, org.opencontainers.image.description=[Syncthing](https://syncthing.net) replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it's transmitted over the Internet., org.opencontainers.image.url=https://github.com/linuxserver/docker-syncthing/packages, org.opencontainers.image.title=Syncthing, org.opencontainers.image.documentation=https://docs.linuxserver.io/images/docker-syncthing, org.opencontainers.image.vendor=linuxserver.io, org.opencontainers.image.revision=596dda3c979579096ebc15569c6eeec9bf2f28b7, org.opencontainers.image.version=v1.27.4-ls135, org.opencontainers.image.ref.name=596dda3c979579096ebc15569c6eeec9bf2f28b7, org.opencontainers.image.licenses=GPL-3.0-only)
ožu 27 21:41:53 archNAS podman[3182]: 2024-03-27 21:41:53.429132564 +0100 CET m=+0.142211978 container start 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25 (image=lscr.io/linuxserver/syncthing:latest, name=syncthing, org.opencontainers.image.documentation=https://docs.linuxserver.io/images/docker-syncthing, org.opencontainers.image.source=https://github.com/linuxserver/docker-syncthing, org.opencontainers.image.created=2024-03-16T01:32:42+00:00, org.opencontainers.image.version=v1.27.4-ls135, io.containers.autoupdate=registry, org.opencontainers.image.description=[Syncthing](https://syncthing.net) replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it's transmitted over the Internet., org.opencontainers.image.title=Syncthing, PODMAN_SYSTEMD_UNIT=syncthing-lsio.service, org.opencontainers.image.vendor=linuxserver.io, org.opencontainers.image.ref.name=596dda3c979579096ebc15569c6eeec9bf2f28b7, maintainer=thelamer, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.authors=linuxserver.io, org.opencontainers.image.revision=596dda3c979579096ebc15569c6eeec9bf2f28b7, build_version=Linuxserver.io version:- v1.27.4-ls135 Build-date:- 2024-03-16T01:32:42+00:00, org.opencontainers.image.url=https://github.com/linuxserver/docker-syncthing/packages)
ožu 27 21:41:53 archNAS systemd[557]: Started Syncthing container.
ožu 27 21:41:53 archNAS syncthing-lsio[3182]: 13b2ae73af680e6e872a2d2ace147b6cc9bb0db08a5f6b70c05f9105673cef25

The focus is on "syncthing-lsio.service: Scheduled restart job, restart counter is at 18", and on why the log is filled with failed systemd starts and passt.avx2 errors.

After more reading I found out about slirp4netns, added Network=slirp4netns to the Jellyfin quadlet, and removed StartLimitBurst=0.
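For context, a minimal sketch of that change in a quadlet [Container] section (other options omitted, names hypothetical); Network= maps to podman run --network, so this switches the unit from the rootless default backend (pasta in Podman 5.0) back to slirp4netns:

# ~/.config/containers/systemd/jellyfin.container (sketch)
[Container]
Image=docker.io/jellyfin/jellyfin:latest
# Use slirp4netns instead of the rootless default network backend (pasta)
Network=slirp4netns

The resulting service status: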

● jellyfin.service - Jellyfin container
     Loaded: loaded (/home/matija/.config/containers/systemd/jellyfin.container; generated)
     Active: active (running) since Wed 2024-03-27 22:50:42 CET; 1min 19s ago
   Main PID: 954 (conmon)
      Tasks: 41 (limit: 27397)
     Memory: 1011.8M (peak: 1.5G)
        CPU: 10.478s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/jellyfin.service
             ├─libpod-payload-3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624
             │ └─959 /jellyfin/jellyfin
             └─runtime
               ├─921 /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 -e 4 --netns-type=path /run/user/1000/netns/netns-dcd01af8-1f75-15b8-5355-4d368a7315f6 tap0
               ├─925 rootlessport
               ├─937 rootlessport-child
               └─954 /usr/bin/conmon --api-version 1 -c 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624 -u 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624 -r /usr/bin/crun -b /home/matija/.local/share/containers/storage/overlay-containers/3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624/userdata -p /run/user/1000/containers/overlay-containers/3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624/userdata/pidfile -n systemd-jellyfin --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624 --full-attach -l none --log-level warning --syslog --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/overlay-containers/3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624/userdata/oci-log --conmon-pidfile /run/user/1000/containers/overlay-containers/3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/matija/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg "" --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/matija/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624

ožu 27 22:50:42 archNAS systemd[546]: Starting Jellyfin container...
ožu 27 22:50:42 archNAS podman[790]: 2024-03-27 22:50:42.679332266 +0100 CET m=+0.097476705 container create 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624 (image=docker.io/jellyfin/jellyfin:latest, name=systemd-jellyfin, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=jellyfin.service)
ožu 27 22:50:42 archNAS podman[790]: 2024-03-27 22:50:42.652482921 +0100 CET m=+0.070627377 image pull 544d674913bc396256f62e1540b88bfa0ed49714b941007c658e04018dea36da docker.io/jellyfin/jellyfin:latest
ožu 27 22:50:42 archNAS podman[790]: 2024-03-27 22:50:42.79049041 +0100 CET m=+0.208634850 container init 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624 (image=docker.io/jellyfin/jellyfin:latest, name=systemd-jellyfin, PODMAN_SYSTEMD_UNIT=jellyfin.service, io.containers.autoupdate=registry)
ožu 27 22:50:42 archNAS systemd[546]: Started Jellyfin container.
ožu 27 22:50:42 archNAS podman[790]: 2024-03-27 22:50:42.80117532 +0100 CET m=+0.219319763 container start 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624 (image=docker.io/jellyfin/jellyfin:latest, name=systemd-jellyfin, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=jellyfin.service)
ožu 27 22:50:42 archNAS jellyfin[790]: 3f9dfeceb8f81e9b72a3b11e763dabdfa4a6df0589d096252c0c26005e099624

It started normally, with no errors in the log.

Steps to reproduce the issue

  1. Add a quadlet; starting it manually works.
  2. Reboot; it won't start automatically, but a manual start works.
  3. Adding StartLimitBurst=0 makes it start automatically, but with lots of errors.
  4. Adding Network=slirp4netns makes it start automatically with no errors in the logs.

Describe the results you received

Quadlet units won't start automatically, but manual starts work without changing the default settings. There are pasta errors, and the containers exit after 6 tries.

Describe the results you expected

The containers just start on boot.

podman info output

host:
  arch: amd64
  buildahVersion: 1.35.1
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.10-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: 2dcd736e46ded79a53339462bc251694b150f870'
  cpuUtilization:
    idlePercent: 73.04
    systemPercent: 9.41
    userPercent: 17.55
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  freeLocks: 2045
  hostname: archNAS
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.8.1-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 13359452160
  memTotal: 23965880320
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: /usr/lib/podman/aardvark-dns is owned by aardvark-dns 1.10.0-1
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: /usr/lib/podman/netavark is owned by netavark 1.10.3-1
    path: /usr/lib/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.14.4-1
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: /usr/bin/pasta is owned by passt 2024_03_26.4988e2b-1
    version: |
      pasta 2024_03_26.4988e2b
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.3-1
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 10485755904
  swapTotal: 10485755904
  uptime: 0h 6m 54.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/matija/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/matija/.local/share/containers/storage
  graphRootAllocated: 490578059264
  graphRootUsed: 170248470528
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/matija/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.0
  Built: 1711060217
  BuiltTime: Thu Mar 21 23:30:17 2024
  GitCommit: e71ec6f1d94d2d97fb3afe08aae0d8adaf8bddf0-dirty
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.0

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Bare metal PC

Additional information

I am sorry if this post is bad and/or redundant; I couldn't find my problem on Google, but I was able to mitigate it somewhat by myself, so maybe it will help someone. If you need more information, just ask.

sbrivio-rh commented 7 months ago

@quietsoviet, thanks for reporting this. Can you please share what addresses (ip address show) and IPv4 routes (ip route show) you have configured on the host, so that I can have a try at reproducing this? pasta attempts (by default) to duplicate routes to the container, and something seems to be going wrong with it.

quietsoviet commented 7 months ago

I still have my docker-compose setup while I slowly transition to rootless podman, so I have lots of veth* networks. ip address show is:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 70:85:c2:30:ef:6f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s31f6
       valid_lft 85763sec preferred_lft 85763sec
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:72:72:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: br-a9335a4dec4e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ca:bc:c5:2b brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-a9335a4dec4e
       valid_lft forever preferred_lft forever
    inet6 fe80::42:caff:febc:c52b/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: br-ccc3d2379183: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:e5:2e:19:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.90.1/24 brd 192.168.90.255 scope global br-ccc3d2379183
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e5ff:fe2e:197e/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
6: br-e73b03ad369a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:0f:3d:9e:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.92.1/24 brd 192.168.92.255 scope global br-e73b03ad369a
       valid_lft forever preferred_lft forever
    inet6 fe80::42:fff:fe3d:9e6a/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
7: br-6c1de2611a72: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:12:75:a7:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.94.1/24 brd 192.168.94.255 scope global br-6c1de2611a72
       valid_lft forever preferred_lft forever
    inet6 fe80::42:12ff:fe75:a794/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:95:0d:a7:61 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
9: br-7ec040365333: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:d2:68:ce:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.91.1/24 brd 192.168.91.255 scope global br-7ec040365333
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d2ff:fe68:ce7d/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
10: br-a6075bab6128: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:33:5b:94:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.1/24 brd 192.168.93.255 scope global br-a6075bab6128
       valid_lft forever preferred_lft forever
    inet6 fe80::42:33ff:fe5b:9446/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
12: veth5f89019@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7ec040365333 state UP group default 
    link/ether ba:b4:e1:7c:d3:46 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::b8b4:e1ff:fe7c:d346/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
14: veth674f72c@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-e73b03ad369a state UP group default 
    link/ether a2:2c:f4:c7:ad:1d brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::a02c:f4ff:fec7:ad1d/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
16: veth10eb9d2@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a9335a4dec4e state UP group default 
    link/ether a6:7a:71:41:2c:8d brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::a47a:71ff:fe41:2c8d/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
18: veth4f77d25@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-e73b03ad369a state UP group default 
    link/ether de:09:7c:cb:2d:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::dc09:7cff:fecb:2d9a/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
20: veth9bd0415@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a6075bab6128 state UP group default 
    link/ether ba:66:76:29:2f:49 brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::b866:76ff:fe29:2f49/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
22: veth3ab0cab@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6c1de2611a72 state UP group default 
    link/ether 9a:6b:3a:0e:bd:8d brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::986b:3aff:fe0e:bd8d/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
24: veth77e9ad9@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 5e:ee:e3:7a:cc:93 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::5cee:e3ff:fe7a:cc93/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
26: veth61809cc@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7ec040365333 state UP group default 
    link/ether aa:4d:e5:99:3d:18 brd ff:ff:ff:ff:ff:ff link-netnsid 24
    inet6 fe80::a84d:e5ff:fe99:3d18/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
28: veth6147126@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-e73b03ad369a state UP group default 
    link/ether 9a:19:76:94:4c:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::9819:76ff:fe94:4cb8/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
30: veth45ca321@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a9335a4dec4e state UP group default 
    link/ether 02:58:fa:a3:04:b4 brd ff:ff:ff:ff:ff:ff link-netnsid 11
    inet6 fe80::58:faff:fea3:4b4/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
32: veth7208511@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7ec040365333 state UP group default 
    link/ether 4e:67:25:bf:85:39 brd ff:ff:ff:ff:ff:ff link-netnsid 22
    inet6 fe80::4c67:25ff:febf:8539/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
34: veth5df184a@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a6075bab6128 state UP group default 
    link/ether 8e:67:41:c0:5b:26 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::8c67:41ff:fec0:5b26/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
36: veth6fbd949@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 52:5f:7d:e2:4b:be brd ff:ff:ff:ff:ff:ff link-netnsid 13
    inet6 fe80::505f:7dff:fee2:4bbe/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
38: vethf468bab@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-e73b03ad369a state UP group default 
    link/ether 8e:19:3d:f1:a0:65 brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::8c19:3dff:fef1:a065/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
40: veth45a4991@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a9335a4dec4e state UP group default 
    link/ether 7a:af:e8:1e:2e:dd brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::78af:e8ff:fe1e:2edd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
42: vethe0dedf1@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a6075bab6128 state UP group default 
    link/ether aa:e5:a4:51:cd:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 23
    inet6 fe80::a8e5:a4ff:fe51:cda5/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
44: veth6e0012c@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6c1de2611a72 state UP group default 
    link/ether 76:3a:49:3a:c8:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 12
    inet6 fe80::743a:49ff:fe3a:c8c2/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
46: veth9de9307@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether fa:00:44:56:f7:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 19
    inet6 fe80::f800:44ff:fe56:f7a8/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
48: veth09e6cfe@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a9335a4dec4e state UP group default 
    link/ether 7a:e4:28:17:85:42 brd ff:ff:ff:ff:ff:ff link-netnsid 15
    inet6 fe80::78e4:28ff:fe17:8542/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
50: vethd833738@if49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether de:11:bb:7b:a9:dd brd ff:ff:ff:ff:ff:ff link-netnsid 17
    inet6 fe80::dc11:bbff:fe7b:a9dd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
52: vethd100eeb@if51: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6c1de2611a72 state UP group default 
    link/ether 0a:c7:93:12:52:78 brd ff:ff:ff:ff:ff:ff link-netnsid 26
    inet6 fe80::8c7:93ff:fe12:5278/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
54: veth95d141f@if53: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether a2:ca:3f:8d:84:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 18
    inet6 fe80::a0ca:3fff:fe8d:84b0/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
56: veth46ea203@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 26:cd:88:87:7f:44 brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::24cd:88ff:fe87:7f44/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
58: vethe4bef8c@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 2e:4d:40:d9:29:ca brd ff:ff:ff:ff:ff:ff link-netnsid 16
    inet6 fe80::2c4d:40ff:fed9:29ca/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
60: veth8d56387@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 6e:54:70:a6:c5:30 brd ff:ff:ff:ff:ff:ff link-netnsid 20
    inet6 fe80::6c54:70ff:fea6:c530/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
62: vethe3347f5@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 46:29:4c:78:75:2e brd ff:ff:ff:ff:ff:ff link-netnsid 21
    inet6 fe80::4429:4cff:fe78:752e/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
64: veth73dab0b@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether ca:58:f9:88:31:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 25
    inet6 fe80::c858:f9ff:fe88:31b0/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
66: veth45969bf@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 1a:68:01:bf:f4:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 22
    inet6 fe80::1868:1ff:febf:f4a5/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
68: veth340222d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether da:43:d4:7f:f9:69 brd ff:ff:ff:ff:ff:ff link-netnsid 24
    inet6 fe80::d843:d4ff:fe7f:f969/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
70: vethda7aa04@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether 8e:18:e9:63:a7:4b brd ff:ff:ff:ff:ff:ff link-netnsid 23
    inet6 fe80::8c18:e9ff:fe63:a74b/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
72: veth1d738ca@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ccc3d2379183 state UP group default 
    link/ether ee:98:13:28:df:3d brd ff:ff:ff:ff:ff:ff link-netnsid 26
    inet6 fe80::ec98:13ff:fe28:df3d/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

and ip route show

default via 192.168.1.1 dev enp0s31f6 proto dhcp src 192.168.1.100 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-a9335a4dec4e proto kernel scope link src 172.18.0.1 
192.168.1.0/24 dev enp0s31f6 proto kernel scope link src 192.168.1.100 metric 100 
192.168.90.0/24 dev br-ccc3d2379183 proto kernel scope link src 192.168.90.1 
192.168.91.0/24 dev br-7ec040365333 proto kernel scope link src 192.168.91.1 
192.168.92.0/24 dev br-e73b03ad369a proto kernel scope link src 192.168.92.1 
192.168.93.0/24 dev br-a6075bab6128 proto kernel scope link src 192.168.93.1 
192.168.94.0/24 dev br-6c1de2611a72 proto kernel scope link src 192.168.94.1 
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1 linkdown

The br-* bridges on 192.168.90.0/24 through 192.168.94.0/24 are docker bridges, as is the one on 172.18.0.0/16. virbr0 (192.168.100.1) is the virt-manager bridge.

sbrivio-rh commented 7 months ago

As an additional test, is pasta itself (without Podman) able to start on your host and configure network interfaces? You could try simply issuing:

pasta --config-net

and see if it starts, and whether the network namespace you're now in has addresses and routes configured (they should roughly mirror those of enp0s31f6 on your host).

quietsoviet commented 7 months ago

pasta --config-net yields

Multiple interfaces with IPv6 routes, use -i to select one
Couldn't pick external interface: disabling IPv6

enp0s31f6 is controlled by NetworkManager and I had disabled IPv6; it was causing some headaches on the system.

After some time pasta stops giving errors and the container starts and is able to connect to the internet, but only with StartLimitBurst=0 or a manual start.

sbrivio-rh commented 7 months ago

pasta --config-net yields

Multiple interfaces with IPv6 routes, use -i to select one
Couldn't pick external interface: disabling IPv6

Okay, and is IPv4 set up in the new network namespace (and shell) you're in, now? Can you check with ip address show and ip route show?

Luap99 commented 7 months ago

Given it starts eventually, my assumption would be that the host network is not fully set up when the quadlet unit is started. So while pasta is trying to figure out the interface addresses/routes, it is possible that the settings change at the same time on the host, possibly causing issues for pasta.

quietsoviet commented 7 months ago

It threw me into root and this is the result:

[matija@archNAS ~]$ pasta --config-net
Multiple interfaces with IPv6 routes, use -i to select one
Couldn't pick external interface: disabling IPv6
[root@archNAS ~]# ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 72:4f:6a:c1:64:77 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global noprefixroute enp0s31f6
       valid_lft forever preferred_lft forever
    inet6 fe80::704f:6aff:fec1:6477/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
[root@archNAS ~]# ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 72:4f:6a:c1:64:77 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global noprefixroute enp0s31f6
       valid_lft forever preferred_lft forever
    inet6 fe80::704f:6aff:fec1:6477/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
[root@archNAS ~]# ip route show
default via 192.168.1.1 dev enp0s31f6 proto dhcp metric 100 
192.168.1.0/24 dev enp0s31f6 proto kernel scope link metric 100

quietsoviet commented 7 months ago

I tried removing linger, because this is a second PC and I have a monitor connected to it with autologin. Still, pasta isn't finished and the containers won't start. Then I added ExecStartPre=/bin/sleep 30, removed StartLimitBurst=0 and Network=slirp4netns, and it starts normally, just with a delay. @Luap99 is right, most likely this is some network problem. Before, I had a bridge network as the main interface with enp0s31f6 as a slave, but I removed that, and I use systemd-resolved with avahi masked or disabled. Sorry for the messy report, but this PC has been running mostly smoothly for 7 years now and I really don't remember what I changed in that time span. If you need more information, just ask.

sbrivio-rh commented 7 months ago

It threw me into root and this is the result:

Thankfully, that's not actual root ;) but it looks like that -- it's UID 0 in a detached user namespace.

[root@archNAS ~]# ip route show
default via 192.168.1.1 dev enp0s31f6 proto dhcp metric 100 
192.168.1.0/24 dev enp0s31f6 proto kernel scope link metric 100

...so, as expected, pasta also works if you start it manually at some later point.

Given it starts eventually, my assumption would be that the host network is not fully set up when the quadlet unit is started. So while pasta is trying to figure out the interface addresses/routes, it is possible that the settings change at the same time on the host, possibly causing issues for pasta.

Right, I also think this is essentially the same as https://github.com/containers/podman/issues/22197.

I guess the most obvious solution is to add preconditions to the systemd services so that initial networking setup is guaranteed to be done before quadlet units can start.

On the other hand, we might have ways to make pasta more robust, but I don't have a concrete idea yet -- suggestions warmly welcome.
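A hedged sketch of what such a precondition might look like for a quadlet unit, using only standard systemd ordering directives; note this is an assumption about the approach, and in a rootless setup the user manager's network-online.target does not necessarily wait for the host network to be fully configured:

[Unit]
# Order the generated service after the network is (nominally) online; whether
# this actually gates on full host network setup depends on the system
Wants=network-online.target
After=network-online.target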

quietsoviet commented 7 months ago

Thank you for replying. I did some scouring on the internet and added

ExecStartPre=/bin/sleep 20

to my quadlets so they delay on boot and start normally with no errors. The only inconvenience is that when I start them manually they are delayed. But it works with rootless podman. I will close this issue, and thank you for your help.
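For anyone copying this workaround, a sketch of where the delay goes; in a quadlet file the [Service] section is passed through to the generated unit, so the sleep runs before podman run (treat the 20-second value as arbitrary):

[Service]
# Workaround only: delays every start, including manual ones
ExecStartPre=/bin/sleep 20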

gdonval commented 7 months ago

Thank you for replying. I did some scouring on the internet and added

That is a god-tier foot-gun. That's not a resolution, that's an ugly hack that is going to generate many heisenbugs in the future.

We need pasta to wait for the network to be set up.

sbrivio-rh commented 7 months ago

Thank you for replying. I did some scouring on the internet and added

That is a god-tier foot-gun. That's not a resolution, that's an ugly hack that is going to generate many heisenbugs in the future.

Thanks for your constructive commentary, Gaël. Note, though, that nobody tried to sell that as a resolution for this issue -- it's simply a workaround. This issue is being tracked as https://github.com/containers/podman/issues/22197, and that's why this one is closed: it's a duplicate.

We need pasta to wait for the network to be set up.

This should be done externally, because you have essentially the same issue with any other network mode, including slirp4netns. The container with slirp4netns will start, but until the host network is fully set up, connectivity will be broken, or worse, one might be sending traffic with incomplete firewalling rules or with inconsistent routing.

See also https://github.com/systemd/systemd/issues/3312#issuecomment-2039973852 -- and mind that contributions, including yours, are warmly welcome.

gdonval commented 7 months ago

it's simply a workaround.

No. It's not. It's sloppiness and incompetence. And it's even less acceptable when it's actively used as an excuse not to revert a broken commit.

Being referred to as a workaround in different places makes it even worse... People will keep that 20s sleep in their units for ages after it's fixed or will be faced with heisenbugs in the meantime when they deploy their units on a less powerful computer or if something is going on with their startup process.

But the most appalling part is that no one in this project is apparently capable of simply saying "we've tried, CI/CD does not pass because we have a race condition here, let's postpone the change until the rest of the environment catches up with us". Instead you decided to take the very "constructive" approach of shipping a known-broken product anyway.

BTW, today we decided to renew our Ubuntu Pro licenses and we'll keep using Docker too. As the person pushing Fedora CoreOS and Podman, trust me, I am not pleased at all but that's the consequence of that sloppiness.

sbrivio-rh commented 7 months ago

it's simply a workaround.

No. It's not. It's sloppiness and incompetence. And it's even less acceptable when it's actively used as an excuse not to revert a broken commit.

Again: the issue exists with slirp4netns and other network modes, too. It's not about reverting a broken commit. With slirp4netns, containers will start, and users are even less likely to notice any issue of this sort (speaking of heisenbugs...).

Looking at typical systemd units for usage with Docker Compose, I'm fairly sure you would face similar issues with them.

But the most appalling part is that podman is apparently incapable of simply saying "we've tried, CI/CD does not pass because we have a race condition here, let's postpone the change until the rest of the environment catches up with us". Instead you decided to take the very "constructive" approach of shipping a known-broken product anyway.

Rest assured that nobody here is knowingly shipping broken things. Continuous integration tests passed because, to reproduce this, you need a setup that makes network initialisation somewhat slow at boot, and that wasn't the case for the test environments at hand.