
Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

publishing a port fails 50% of the time in nextcloud container after a restart of machine #18746

Closed: basilrabi closed this issue 1 year ago

basilrabi commented 1 year ago

Issue Description

About 50% of the time after I restart my server, the published port of a container is not accessible. I use three containers for my Nextcloud instance; below is the script:

podman network create nextcloud-net

podman run --detach \
  --env MYSQL_DATABASE=$DB_NAME \
  --env MYSQL_USER=$DB_USER \
  --env MYSQL_PASSWORD=$DB_USER_PASSWORD \
  --env MYSQL_ROOT_PASSWORD=$DB_ROOT_PASSWORD \
  -v "$VOLUME_NEXTCLOUD_DB:/var/lib/mysql:z" \
  --network nextcloud-net \
  --restart on-failure \
  --name nextcloud-db \
  docker.io/library/mariadb:latest

podman run --detach \
  --env MYSQL_HOST=nextcloud-db.dns.podman \
  --env MYSQL_DATABASE=$DB_NAME \
  --env MYSQL_USER=$DB_USER \
  --env MYSQL_PASSWORD=$DB_USER_PASSWORD \
  --env NEXTCLOUD_ADMIN_USER=$NC_ADMIN \
  --env NEXTCLOUD_ADMIN_PASSWORD=$NC_PASSWORD \
  --env NEXTCLOUD_TRUSTED_DOMAINS="$NC_TRUSTED_DOMAINS" \
  --env PHP_MEMORY_LIMIT="$PHP_MEMORY_LIMIT" \
  --env PHP_UPLOAD_LIMIT="$PHP_UPLOAD_LIMIT" \
  --env TRUSTED_PROXIES="$NC_TRUSTED_PROXIES" \
  --env OVERWRITEPROTOCOL=https \
  -v "$VOLUME_NEXTCLOUD_APP:/var/www/html:z" \
  -v "$VOLUME_NEXTCLOUD_DATA:/var/www/html/data:z" \
  -v "$VOLUME_NEXTCLOUD_LOG:/var/log:z" \
  --network nextcloud-net \
  --restart on-failure \
  --name nextcloud \
  --publish 8080:80 \
  docker.io/library/nextcloud

podman run --detach \
  --publish 9980:9980 \
  --network nextcloud-net \
  --name code \
  --privileged \
  -e "extra_params=--o:ssl.enable=false --o:ssl.termination=true --o:net.post_allow.host[0]=.+ --o:storage.wopi.host[0]=.+" \
  docker.io/collabora/code

generate_service="podman generate systemd --start-timeout=10 --restart-sec=10"
$generate_service nextcloud-db > /etc/systemd/system/nextcloud-db.service
$generate_service nextcloud > /etc/systemd/system/nextcloud-app.service
$generate_service code > /etc/systemd/system/code.service

I can access the Collabora Online container 100% of the time. The only issue I'm having is with the Nextcloud container, which is not accessible via port 8080 about 50% of the time. The Nextcloud container is running, and I can access it directly using its container IP address.

podman version
Client:       Podman Engine
Version:      4.5.1
API Version:  4.5.1
Go Version:   go1.19.9
Built:        Sat May 27 01:58:29 2023
OS/Arch:      linux/amd64

rpm -q podman
podman-4.5.1-1.fc37.x86_64

Steps to reproduce the issue

  1. Restart the machine hosting containers that were running perfectly.

Describe the results you received

The nextcloud container cannot be accessed via its published port 50% of the time.

Describe the results you expected

The nextcloud container can be accessed via its published port 100% of the time.

podman info output

host:
  arch: amd64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 99.18
    systemPercent: 0.34
    userPercent: 0.48
  cpus: 16
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: server
    version: "37"
  eventLogger: journald
  hostname: datamanagement
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.2.15-200.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 44962922496
  memTotal: 67411267584
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.5-1.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 10h 30m 26.00s (Approximately 0.42 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 2197878210560
  graphRootUsed: 331961233408
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.5.1
  Built: 1685123909
  BuiltTime: Sat May 27 01:58:29 2023
  GitCommit: ""
  GoVersion: go1.19.9
  Os: linux
  OsArch: linux/amd64
  Version: 4.5.1

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

No

Additional environment details

NONE

Additional information

podman ps says:

CONTAINER ID  IMAGE                                  COMMAND               CREATED        STATUS        PORTS                   NAMES
6123afe2c3aa  localhost/mariadb:2023-05-31           mariadbd              8 seconds ago  Up 8 seconds                          nextcloud-db
7a0418955464  localhost/nextcloud:ffmpeg-2023-05-31  apache2-foregroun...  7 seconds ago  Up 7 seconds  0.0.0.0:8080->80/tcp    nextcloud
90f7c7e11b98  localhost/code:fonts-2023-05-31        /start-collabora-...  6 seconds ago  Up 7 seconds  0.0.0.0:9980->9980/tcp  code

while podman logs nextcloud says:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.89.0.9. Set the 'ServerName' directive globally to suppress this message

What is written in podman logs is unrelated to my issue; it also appears when I can access the container via its published port.

What is interesting is that when I check the ports via ss, the published port for the Collabora Online container (9980) is visible while the published port for the Nextcloud container is not.

ss -lp | grep 9980
tcp   LISTEN 0      4096                                                             0.0.0.0:9980                       0.0.0.0:*    users:(("conmon",pid=147518,fd=5)) 

ss -lp | grep 8080
# EMPTY

After around 10 iterations of deleting all containers and recreating them, port 8080 becomes accessible. How do I debug this?
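The ss checks above can be wrapped into a small helper that distinguishes "port never bound" from "bound but not reachable". A minimal sketch (the helper name port_bound is made up here); on the host you would feed it real ss -ltn output:

```shell
# port_bound PORT — reads `ss -ltn`-style output on stdin and reports
# whether any socket is listening on the given TCP port.
# Host usage (port numbers taken from this issue):
#   ss -ltn | port_bound 8080
#   ss -ltn | port_bound 9980
port_bound() {
  port="$1"
  if grep -Eq ":${port}[[:space:]]"; then
    echo "port ${port}: bound"
  else
    echo "port ${port}: NOT bound"
  fi
}
```

If the port reports as bound but the service is still unreachable, the problem is in forwarding (firewall rules) rather than in the bind itself.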

Luap99 commented 1 year ago

The fact that you do not even see the port bound in ss is very confusing to me.

Please check the full unit logs with journalctl -u nextcloud-app.service; that should show us whether any errors were logged. If that doesn't show anything, please add --log-level debug to the commands in the systemd unit and check the logs again.


Also, you shouldn't use --restart ... when running podman via systemd; rely on the systemd restart policy instead (podman generate systemd --restart-policy ...). However, I don't think that should matter for your issue here.
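The restart-policy suggestion above maps onto the generated unit roughly like this: drop --restart from podman run and generate with podman generate systemd --restart-policy=on-failure --restart-sec=10 instead, so the [Service] section carries the policy. A hedged sketch (flag names as documented for Podman 4.x; compare with the Restart=always line in the unit shown later in this thread):

```ini
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
RestartSec=10
```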

basilrabi commented 1 year ago

Here is the output with the debug:

systemd[1]: Starting nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service...
podman[555722]: time="2023-06-01T15:51:11+08:00" level=info msg="/usr/bin/podman filtering at log level debug"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Called start.PersistentPreRunE(/usr/bin/podman start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using graph driver overlay"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using graph root /var/lib/containers/storage"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using run root /run/containers/storage"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using tmp dir /run/libpod"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using transient store: false"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Cached value indicated that overlay is supported"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Cached value indicated that overlay is supported"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Cached value indicated that metacopy is being used"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Cached value indicated that native-diff is not being used"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Initializing event backend journald"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
podman[555722]: time="2023-06-01T15:51:11+08:00" level=info msg="Setting parallel job count to 49"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are supported"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="Made network namespace at /run/netns/netns-d87c3270-d48a-7673-96b8-c61f0ff28c24 for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
podman[555722]: time="2023-06-01T15:51:11+08:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/BLKHWGEIQUXMA6OQVR576AGIQ5:/var/lib/containers/storage/overlay/l/6MDDPR4UMJXCB4S2ZT7CBWO4XC:/var/lib/containers/storage/overlay/l/K2E4QGFFVFWO3E7YEUICWCVGYB:/var/lib/containers/storage/overlay/l/LYT3IWSB3NUOIP37VVEA7QX4Q2:/var/lib/containers/storage/overlay/l/SSF5R4MQ5RQSYZMNDX7WTOK5ER:/var/lib/containers/storage/overlay/l/3JSPD5ATWK5DSLJEKDK7P6L7LP:/var/lib/containers/storage/overlay/l/WUJLU5YPY3GG66AWWAD5C7H4MQ:/var/lib/containers/storage/overlay/l/DJJPNOFELWER7YGUJVRUQFQDB2:/var/lib/containers/storage/overlay/l/OY6KYKDZG6FEQWCDKF4IMYSRMW:/var/lib/containers/storage/overlay/l/OV36AVSN3JLKGPH3OHMLZBBYRU:/var/lib/containers/storage/overlay/l/EM5DBPVHIAEGGD3T7I7NUST4LT:/var/lib/containers/storage/overlay/l/TABUIF5ZLTCPIEMTQMJWQGX3SX:/var/lib/containers/storage/overlay/l/4KYFYT2D4EWG4BVJZY5TBF3MC7:/var/lib/containers/storage/overlay/l/C343GGUASW5ZMAJJPRKFT57SOP:/var/lib/containers/storage/overlay/l/QZGTGUIJSC5PPYUFP4ADRJQH3U:/var/lib/containers/storage/overlay/l/OY7IEEOLIVWQIAZNY4BHY6G2DI:/var/lib/containers/storage/overlay/l/XPKN6YQAIAPRYIOE6UAGIXYJ7Q:/var/lib/containers/storage/overlay/l/TNUUPP25IXLAZLGLPPJI4XFGVO:/var/lib/containers/storage/overlay/l/JVMSDHMTUFNNQGW333XXRSRQKP:/var/lib/containers/storage/overlay/l/VUURUXEPAHHAUB54JKHGBHZP4Z:/var/lib/containers/storage/overlay/l/FCQ36HISR7UV4VUAPQ5UO3NGZY,upperdir=/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/diff,workdir=/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c65,c795\""
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Successfully loaded network nextcloud-net: &{nextcloud-net ed82155c4b467840190f734a8cea1e1e786831fd18243299e668510c20ba37b7 bridge podman1 2023-05-31 11:53:17.709174091 +0800 PST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] false false true [] map[] map[] map[driver:host-local]}"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Successfully loaded 2 networks"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Mounted container \"8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44\" at \"/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/merged\""
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Created root filesystem for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 at /var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/merged"
podman[555736]: [DEBUG netavark::network::validation] "Validating network namespace..."
podman[555736]: [DEBUG netavark::commands::setup] "Setting up..."
podman[555736]: [INFO  netavark::firewall] Using iptables firewall driver
podman[555736]: [DEBUG netavark::network::bridge] Setup network nextcloud-net
podman[555736]: [DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.23/24]
podman[555736]: [DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
podman[555736]: [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
podman[555736]: [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
podman[555736]: [INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-B0A693FBE5D82 exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-B0A693FBE5D82 exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT exists on table nat and chain NETAVARK-B0A693FBE5D82
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-B0A693FBE5D82
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-B0A693FBE5D82 exists on table nat and chain POSTROUTING
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
podman[555736]: [DEBUG netavark::firewall::iptables] Adding firewalld rules for network 10.89.0.0/24
podman[555736]: [DEBUG netavark::firewall::firewalld] Subnet 10.89.0.0/24 already exists in zone trusted
podman[555736]: [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-B0A693FBE5D82 exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-B0A693FBE5D82 exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-SETMARK
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-MASQ
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.23:80 --destination-port 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 8080 -m comment --comment 'dnat name: nextcloud-net id: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44' created on table nat and chain NETAVARK-HOSTPORT-DNAT
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain PREROUTING
podman[555736]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain OUTPUT
podman[555736]: [DEBUG netavark::commands::setup] {
podman[555736]:         "nextcloud-net": StatusBlock {
podman[555736]:             dns_search_domains: Some(
podman[555736]:                 [
podman[555736]:                     "dns.podman",
podman[555736]:                 ],
podman[555736]:             ),
podman[555736]:             dns_server_ips: Some(
podman[555736]:                 [
podman[555736]:                     10.89.0.1,
podman[555736]:                 ],
podman[555736]:             ),
podman[555736]:             interfaces: Some(
podman[555736]:                 {
podman[555736]:                     "eth0": NetInterface {
podman[555736]:                         mac_address: "02:7f:0e:a2:45:2d",
podman[555736]:                         subnets: Some(
podman[555736]:                             [
podman[555736]:                                 NetAddress {
podman[555736]:                                     gateway: Some(
podman[555736]:                                         10.89.0.1,
podman[555736]:                                     ),
podman[555736]:                                     ipnet: 10.89.0.23/24,
podman[555736]:                                 },
podman[555736]:                             ],
podman[555736]:                         ),
podman[555736]:                     },
podman[555736]:                 },
podman[555736]:             ),
podman[555736]:         },
podman[555736]:     }
podman[555736]: [DEBUG netavark::commands::setup] "Setup complete"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Setting Cgroups for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 to machine.slice:libpod:8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Workdir \"/var/www/html\" resolved to a volume or mount"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Created OCI spec for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 at /var/lib/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/config.json"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 -u 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata -p /run/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/pidfile -n nextcloud --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44]"
conmon[555864]: conmon 8579d7094b0d8516acdf <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
conmon[555864]: conmon 8579d7094b0d8516acdf <ndebug>: terminal_ctrl_fd: 12
conmon[555864]: conmon 8579d7094b0d8516acdf <ndebug>: winsz read side: 16, winsz write side: 16
conmon[555864]: conmon 8579d7094b0d8516acdf <ndebug>: container PID: 555867
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Received: 555867"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=info msg="Got Conmon PID as 555864"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Created container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 in OCI runtime"
podman[555722]: 2023-06-01 15:51:12.780138755 +0800 PST m=+0.832962752 container init 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (image=localhost/nextcloud:ffmpeg-2023-05-31, name=nextcloud, io.buildah.version=1.30.0)
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Starting container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 with command [/entrypoint.sh apache2-foreground]"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Started container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Notify sent successfully"
podman[555722]: 2023-06-01 15:51:12.790506121 +0800 PST m=+0.843330135 container start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (image=localhost/nextcloud:ffmpeg-2023-05-31, name=nextcloud, io.buildah.version=1.30.0)
podman[555722]: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Called start.PersistentPostRunE(/usr/bin/podman start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
podman[555722]: time="2023-06-01T15:51:12+08:00" level=debug msg="Shutting down engines"
systemd[1]: Started nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
nextcloud[555864]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.89.0.23. Set the 'ServerName' directive globally to suppress this message
Luap99 commented 1 year ago

The debug output clearly shows that we add the required iptables rules. And it is not working after this start?!

basilrabi commented 1 year ago

The debug output clearly shows that we add the required iptables rules. And it is not working after this start?!

Yes, still not working. Even ss -lp | grep 8080 is empty.

basilrabi commented 1 year ago

The collabora container works fine though:

# ss -lp | grep 9980
tcp   LISTEN 0      4096                                                             0.0.0.0:9980                       0.0.0.0:*    users:(("conmon",pid=555370,fd=5))                                                  
Luap99 commented 1 year ago

Did you alter your systemd unit in any way? Can you show me the output of systemctl cat nextcloud-app.service?

basilrabi commented 1 year ago

Did you alter your systemd unit in any way? Can you show me the output of systemctl cat nextcloud-app.service?

I only added --log-level debug

# container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service
# autogenerated by Podman 4.5.1
# Thu Jun  1 15:48:25 PST 2023

[Unit]
Description=Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/run/containers/storage

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=always
TimeoutStartSec=10
TimeoutStopSec=70
ExecStart=/usr/bin/podman start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug
ExecStop=/usr/bin/podman stop  \
    -t 10 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug
ExecStopPost=/usr/bin/podman stop  \
    -t 10 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug
PIDFile=/run/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/conmon.pid
Type=forking

[Install]
WantedBy=default.target
Luap99 commented 1 year ago

Ok something super weird must be going on.

Please show me the output of readlink /proc/self/ns/net and readlink /proc/$(podman container inspect --format {{.State.ConmonPid}} nextcloud)/ns/net. Also, iptables -nvL -t nat should help to see what rules are actually there.
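For context, comparing those two readlink values tells you whether conmon shares the host's network namespace; if it does, a port it holds open should show up in the host's ss output. A trivial comparison sketch (the helper name same_ns is made up):

```shell
# same_ns A B — compare two `readlink /proc/<pid>/ns/net` values, e.g.
#   same_ns "$(readlink /proc/self/ns/net)" \
#           "$(readlink /proc/<conmon-pid>/ns/net)"
same_ns() {
  if [ "$1" = "$2" ]; then
    echo "same network namespace"
  else
    echo "different network namespaces"
  fi
}
```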

basilrabi commented 1 year ago
# readlink /proc/self/ns/net
net:[4026531840]

# readlink /proc/$(podman container inspect --format {{.State.ConmonPid}} nextcloud)/ns/net
net:[4026531840]

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 199K   33M NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
39681 3088K NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
86602 6903K NETAVARK-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  130  9369 NETAVARK-B0A693FBE5D82  all  --  *      *       10.89.1.0/24         0.0.0.0/0           
  200 13469 NETAVARK-B0A693FBE5D82  all  --  *      *       10.89.0.0/24         0.0.0.0/0           

Chain NETAVARK-B0A693FBE5D82 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
   98  5904 ACCEPT     all  --  *      *       0.0.0.0/0            10.89.1.0/24        
  185 11100 ACCEPT     all  --  *      *       0.0.0.0/0            10.89.0.0/24        
   14  2100 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4         

Chain NETAVARK-DN-B0A693FBE5D82 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
  213 12780 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:10.89.1.2:80
  734 44040 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.1.3:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:10.89.1.6:80
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:9980
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.0.21:9980
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:8080
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:8080
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:10.89.0.23:80

Chain NETAVARK-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 /* dnat name: nextcloud-net id: f4047235d8eac7023d7063f471671a1e0e525b72ec209ee257af5ef3038b3f2d */
    0     0 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 /* dnat name: nextcloud-net id: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 */

Chain NETAVARK-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    3   160 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark portfw masq mark */ mark match 0x2000/0x2000

Chain NETAVARK-HOSTPORT-SETMARK (4 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x2000
Luap99 commented 1 year ago

Ok, your iptables rules are definitely completely screwed up. Does this really happen after a reboot? Do you have anything running that saves and restores iptables rules after boot, e.g. iptables-restore? You have rules for two different IPv4 subnets in there, so traffic is matched incorrectly. You need to figure out what creates those rules. However, that still does not explain why podman does not bind port 8080; that should be visible regardless of firewall rules.
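One way to spot the stale entries is to look for host ports that have more than one DNAT target, which is the signature of rules left over from earlier container instances. This is a hypothetical helper, not part of the thread: normally you would pipe `iptables-save -t nat` into the awk filter, but here a few rules from the dump above are inlined so the script is self-contained.

```shell
# Flag host ports with more than one DNAT target in a nat-table dump.
# The rules below are a sample transcribed from the dump above.
rules='-A NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 8080 -j DNAT --to-destination 10.89.1.2:80
-A NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 9980 -j DNAT --to-destination 10.89.1.3:9980
-A NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 8080 -j DNAT --to-destination 10.89.1.6:80
-A NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 9980 -j DNAT --to-destination 10.89.0.21:9980
-A NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 8080 -j DNAT --to-destination 10.89.0.23:80'

# For each DNAT rule, record its --dport; report any port seen more than once.
dups=$(printf '%s\n' "$rules" | awk '
  /-j DNAT/ {
    for (i = 1; i <= NF; i++) if ($i == "--dport") port = $(i + 1)
    count[port]++
  }
  END { for (p in count) if (count[p] > 1) print "port " p " has " count[p] " DNAT targets" }')
printf '%s\n' "$dups"
```

On a healthy host this prints nothing; against the dump above it flags both 8080 and 9980, matching the duplicate DNAT lines visible in the NETAVARK-DN chain.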

basilrabi commented 1 year ago

> Does this really happen after a reboot?

Yes, but not 100% of the time. Sometimes it works after a reboot.

> Do you have anything running that saves and restores iptables rules after boot, e.g. iptables-restore?

I did not configure anything regarding iptables rules on my Fedora 37 server.

> You have rules for two different IPv4 subnets in there, so traffic is matched incorrectly. You need to figure out what creates those rules.

I'm not really sure how to figure this out. I'm using ZeroTier for VPN, which I installed from RPM Fusion. Can this affect the iptables rules? I tried stopping it but it had no effect. I'm also using dnsmasq to make the machine a DNS server, but I'm sure that has no effect on the iptables rules. I'm pretty sure the other programs I'm running with the default Fedora configs (nginx, rstudio-server, postgresql) have no effect on the iptables rules either.

Luap99 commented 1 year ago

After a reboot all iptables rules should be cleared, so I don't understand why you have some of them, especially with a different subnet. On Fedora there is firewalld, but that should not store these rules in my experience. When you run firewall-cmd --reload it should wipe all rules; confirm that iptables lists no rules after that. Then you should be able to run podman network reload --all to recreate the podman firewall rules.
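As a concrete sketch of that reset sequence: the three commands are the ones named above, while the DRY_RUN wrapper is my addition so the script can be shown without actually touching the firewall.

```shell
#!/bin/sh
# Sketch of the reset sequence described above; run as root on the host.
# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0
# to really execute them.
: "${DRY_RUN:=1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run firewall-cmd --reload         # wipe all rules via firewalld
run iptables -nvL -t nat          # confirm the nat table is empty now
run podman network reload --all   # recreate podman's firewall rules
```

With DRY_RUN left at 1 the script prints the three "would run: ..." lines, which is a cheap way to review the sequence before running it for real.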

basilrabi commented 1 year ago

On a fresh reboot:

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Then, right after the two containers that are enabled in systemd (collabora and mariadb) are up:

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   33  2509 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    4   261 NETAVARK-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    3   187 NETAVARK-B0A693FBE5D82  all  --  *      *       10.89.0.0/24         0.0.0.0/0           

Chain NETAVARK-B0A693FBE5D82 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 ACCEPT     all  --  *      *       0.0.0.0/0            10.89.0.0/24        
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4         

Chain NETAVARK-DN-B0A693FBE5D82 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:9980
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.0.3:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.0.6:9980

Chain NETAVARK-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 /* dnat name: nextcloud-net id: f4047235d8eac7023d7063f471671a1e0e525b72ec209ee257af5ef3038b3f2d */

Chain NETAVARK-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark portfw masq mark */ mark match 0x2000/0x2000

Chain NETAVARK-HOSTPORT-SETMARK (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x2000

Afterwards, the enabled nextcloud container service keeps failing to start; it only starts successfully after 21 attempts:

Jun 01 21:48:15 datamanagement systemd[1]: Failed to start nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:15 datamanagement systemd[1]: nextcloud-app.service: Consumed 1.867s CPU time.
Jun 01 21:48:15 datamanagement systemd[1]: nextcloud-app.service: Scheduled restart job, restart counter is at 21.
Jun 01 21:48:15 datamanagement systemd[1]: Stopped nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:15 datamanagement systemd[1]: nextcloud-app.service: Consumed 1.867s CPU time.
Jun 01 21:48:15 datamanagement systemd[1]: Starting nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service...
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=info msg="/usr/bin/podman filtering at log level debug"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Called start.PersistentPreRunE(/usr/bin/podman start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using graph driver overlay"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using graph root /var/lib/containers/storage"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using run root /run/containers/storage"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using tmp dir /run/libpod"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using transient store: false"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Cached value indicated that overlay is supported"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Cached value indicated that overlay is supported"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Cached value indicated that metacopy is being used"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Cached value indicated that native-diff is not being used"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Initializing event backend journald"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=info msg="Setting parallel job count to 49"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are supported"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Made network namespace at /run/netns/netns-4bf54521-00a2-3901-f629-58a6533b7ff4 for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Successfully loaded network nextcloud-net: &{nextcloud-net ed82155c4b467840190f734a8cea1e1e786831fd18243299e668510c20ba37b7 bridge podman1 2023-05-31 11:53:17.709174091 +0800 PST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] false false true [] map[] map[] map[driver:host-local]}"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Successfully loaded 2 networks"
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::validation] "Validating network namespace..."
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::commands::setup] "Setting up..."
Jun 01 21:48:15 datamanagement podman[10912]: [INFO  netavark::firewall] Using iptables firewall driver
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::bridge] Setup network nextcloud-net
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.26/24]
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
Jun 01 21:48:15 datamanagement podman[10912]: [INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-B0A693FBE5D82 exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-B0A693FBE5D82 exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT exists on table nat and chain NETAVARK-B0A693FBE5D82
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-B0A693FBE5D82
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-B0A693FBE5D82 exists on table nat and chain POSTROUTING
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::iptables] Adding firewalld rules for network 10.89.0.0/24
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::firewalld] Subnet 10.89.0.0/24 already exists in zone trusted
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-B0A693FBE5D82 exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-B0A693FBE5D82 exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-SETMARK
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-MASQ
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/BLKHWGEIQUXMA6OQVR576AGIQ5:/var/lib/containers/storage/overlay/l/6MDDPR4UMJXCB4S2ZT7CBWO4XC:/var/lib/containers/storage/overlay/l/K2E4QGFFVFWO3E7YEUICWCVGYB:/var/lib/containers/storage/overlay/l/LYT3IWSB3NUOIP37VVEA7QX4Q2:/var/lib/containers/storage/overlay/l/SSF5R4MQ5RQSYZMNDX7WTOK5ER:/var/lib/containers/storage/overlay/l/3JSPD5ATWK5DSLJEKDK7P6L7LP:/var/lib/containers/storage/overlay/l/WUJLU5YPY3GG66AWWAD5C7H4MQ:/var/lib/containers/storage/overlay/l/DJJPNOFELWER7YGUJVRUQFQDB2:/var/lib/containers/storage/overlay/l/OY6KYKDZG6FEQWCDKF4IMYSRMW:/var/lib/containers/storage/overlay/l/OV36AVSN3JLKGPH3OHMLZBBYRU:/var/lib/containers/storage/overlay/l/EM5DBPVHIAEGGD3T7I7NUST4LT:/var/lib/containers/storage/overlay/l/TABUIF5ZLTCPIEMTQMJWQGX3SX:/var/lib/containers/storage/overlay/l/4KYFYT2D4EWG4BVJZY5TBF3MC7:/var/lib/containers/storage/overlay/l/C343GGUASW5ZMAJJPRKFT57SOP:/var/lib/containers/storage/overlay/l/QZGTGUIJSC5PPYUFP4ADRJQH3U:/var/lib/containers/storage/overlay/l/OY7IEEOLIVWQIAZNY4BHY6G2DI:/var/lib/containers/storage/overlay/l/XPKN6YQAIAPRYIOE6UAGIXYJ7Q:/var/lib/containers/storage/overlay/l/TNUUPP25IXLAZLGLPPJI4XFGVO:/var/lib/containers/storage/overlay/l/JVMSDHMTUFNNQGW333XXRSRQKP:/var/lib/containers/storage/overlay/l/VUURUXEPAHHAUB54JKHGBHZP4Z:/var/lib/containers/storage/overlay/l/FCQ36HISR7UV4VUAPQ5UO3NGZY,upperdir=/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/diff,workdir=/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c65,c795\""
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.26:80 --destination-port 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 8080 -m comment --comment 'dnat name: nextcloud-net id: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44' created on table nat and chain NETAVARK-HOSTPORT-DNAT
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Mounted container \"8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44\" at \"/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/merged\""
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Created root filesystem for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 at /var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/merged"
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain PREROUTING
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain OUTPUT
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::commands::setup] {
Jun 01 21:48:15 datamanagement podman[10912]:         "nextcloud-net": StatusBlock {
Jun 01 21:48:15 datamanagement podman[10912]:             dns_search_domains: Some(
Jun 01 21:48:15 datamanagement podman[10912]:                 [
Jun 01 21:48:15 datamanagement podman[10912]:                     "dns.podman",
Jun 01 21:48:15 datamanagement podman[10912]:                 ],
Jun 01 21:48:15 datamanagement podman[10912]:             ),
Jun 01 21:48:15 datamanagement podman[10912]:             dns_server_ips: Some(
Jun 01 21:48:15 datamanagement podman[10912]:                 [
Jun 01 21:48:15 datamanagement podman[10912]:                     10.89.0.1,
Jun 01 21:48:15 datamanagement podman[10912]:                 ],
Jun 01 21:48:15 datamanagement podman[10912]:             ),
Jun 01 21:48:15 datamanagement podman[10912]:             interfaces: Some(
Jun 01 21:48:15 datamanagement podman[10912]:                 {
Jun 01 21:48:15 datamanagement podman[10912]:                     "eth0": NetInterface {
Jun 01 21:48:15 datamanagement podman[10912]:                         mac_address: "fa:50:ac:e2:c1:76",
Jun 01 21:48:15 datamanagement podman[10912]:                         subnets: Some(
Jun 01 21:48:15 datamanagement podman[10912]:                             [
Jun 01 21:48:15 datamanagement podman[10912]:                                 NetAddress {
Jun 01 21:48:15 datamanagement podman[10912]:                                     gateway: Some(
Jun 01 21:48:15 datamanagement podman[10912]:                                         10.89.0.1,
Jun 01 21:48:15 datamanagement podman[10912]:                                     ),
Jun 01 21:48:15 datamanagement podman[10912]:                                     ipnet: 10.89.0.26/24,
Jun 01 21:48:15 datamanagement podman[10912]:                                 },
Jun 01 21:48:15 datamanagement podman[10912]:                             ],
Jun 01 21:48:15 datamanagement podman[10912]:                         ),
Jun 01 21:48:15 datamanagement podman[10912]:                     },
Jun 01 21:48:15 datamanagement podman[10912]:                 },
Jun 01 21:48:15 datamanagement podman[10912]:             ),
Jun 01 21:48:15 datamanagement podman[10912]:         },
Jun 01 21:48:15 datamanagement podman[10912]:     }
Jun 01 21:48:15 datamanagement podman[10912]: [DEBUG netavark::commands::setup] "Setup complete"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
Jun 01 21:48:15 datamanagement podman[10897]: time="2023-06-01T21:48:15+08:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
Jun 01 21:48:25 datamanagement systemd[1]: nextcloud-app.service: start operation timed out. Terminating.
Jun 01 21:48:25 datamanagement podman[10897]: time="2023-06-01T21:48:25+08:00" level=info msg="Received shutdown signal \"terminated\", terminating!" PID=10897
Jun 01 21:48:25 datamanagement podman[10897]: time="2023-06-01T21:48:25+08:00" level=info msg="Invoking shutdown handler \"libpod\"" PID=10897
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=info msg="/usr/bin/podman filtering at log level debug"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Called stop.PersistentPreRunE(/usr/bin/podman stop -t 10 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using graph driver overlay"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using graph root /var/lib/containers/storage"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using run root /run/containers/storage"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using tmp dir /run/libpod"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using transient store: false"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Cached value indicated that overlay is supported"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Cached value indicated that overlay is supported"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Cached value indicated that metacopy is being used"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Cached value indicated that native-diff is not being used"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Initializing event backend journald"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=info msg="Setting parallel job count to 49"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Starting parallel job on container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Stopping ctr 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (timeout 10)"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 is already stopped"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Cleaning up container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Tearing down network namespace at /run/netns/netns-4bf54521-00a2-3901-f629-58a6533b7ff4 for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Successfully loaded network nextcloud-net: &{nextcloud-net ed82155c4b467840190f734a8cea1e1e786831fd18243299e668510c20ba37b7 bridge podman1 2023-05-31 11:53:17.709174091 +0800 PST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] false false true [] map[] map[] map[driver:host-local]}"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Successfully loaded 2 networks"
Jun 01 21:48:25 datamanagement podman[11036]: [DEBUG netavark::commands::teardown] "Tearing down.."
Jun 01 21:48:25 datamanagement podman[11036]: [INFO  netavark::firewall] Using iptables firewall driver
Jun 01 21:48:25 datamanagement podman[11036]: [DEBUG netavark::commands::teardown] "Teardown complete"
Jun 01 21:48:26 datamanagement podman[11021]: time="2023-06-01T21:48:26+08:00" level=debug msg="Unmounted container \"8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44\""
Jun 01 21:48:26 datamanagement podman[11021]: 2023-06-01 21:48:26.317111574 +0800 PST m=+0.566072414 container cleanup 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (image=localhost/nextcloud:ffmpeg-2023-05-31, name=nextcloud, io.buildah.version=1.30.0)
Jun 01 21:48:26 datamanagement podman[11021]: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44
Jun 01 21:48:26 datamanagement podman[11021]: time="2023-06-01T21:48:26+08:00" level=debug msg="Called stop.PersistentPostRunE(/usr/bin/podman stop -t 10 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
Jun 01 21:48:26 datamanagement podman[11021]: time="2023-06-01T21:48:26+08:00" level=debug msg="Shutting down engines"
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Failed with result 'timeout'.
Jun 01 21:48:26 datamanagement systemd[1]: Failed to start nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Consumed 2.672s CPU time.
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Scheduled restart job, restart counter is at 22.
Jun 01 21:48:26 datamanagement systemd[1]: Stopped nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Consumed 2.672s CPU time.
Jun 01 21:48:26 datamanagement systemd[1]: Starting nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service...
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=info msg="/usr/bin/podman filtering at log level debug"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Called start.PersistentPreRunE(/usr/bin/podman start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using graph driver overlay"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using graph root /var/lib/containers/storage"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using run root /run/containers/storage"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using tmp dir /run/libpod"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using transient store: false"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Cached value indicated that overlay is supported"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Cached value indicated that overlay is supported"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Cached value indicated that metacopy is being used"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Cached value indicated that native-diff is not being used"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Initializing event backend journald"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=info msg="Setting parallel job count to 49"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Made network namespace at /run/netns/netns-476beee5-6a70-a5be-3ed2-7e3ea89ed6fc for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are supported"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Successfully loaded network nextcloud-net: &{nextcloud-net ed82155c4b467840190f734a8cea1e1e786831fd18243299e668510c20ba37b7 bridge podman1 2023-05-31 11:53:17.709174091 +0800 PST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] false false true [] map[] map[] map[driver:host-local]}"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Successfully loaded 2 networks"
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::validation] "Validating network namespace..."
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::commands::setup] "Setting up..."
Jun 01 21:48:26 datamanagement podman[11101]: [INFO  netavark::firewall] Using iptables firewall driver
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::bridge] Setup network nextcloud-net
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.27/24]
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
Jun 01 21:48:26 datamanagement podman[11101]: [INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-B0A693FBE5D82 exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-B0A693FBE5D82 exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT exists on table nat and chain NETAVARK-B0A693FBE5D82
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-B0A693FBE5D82
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-B0A693FBE5D82 exists on table nat and chain POSTROUTING
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/BLKHWGEIQUXMA6OQVR576AGIQ5:/var/lib/containers/storage/overlay/l/6MDDPR4UMJXCB4S2ZT7CBWO4XC:/var/lib/containers/storage/overlay/l/K2E4QGFFVFWO3E7YEUICWCVGYB:/var/lib/containers/storage/overlay/l/LYT3IWSB3NUOIP37VVEA7QX4Q2:/var/lib/containers/storage/overlay/l/SSF5R4MQ5RQSYZMNDX7WTOK5ER:/var/lib/containers/storage/overlay/l/3JSPD5ATWK5DSLJEKDK7P6L7LP:/var/lib/containers/storage/overlay/l/WUJLU5YPY3GG66AWWAD5C7H4MQ:/var/lib/containers/storage/overlay/l/DJJPNOFELWER7YGUJVRUQFQDB2:/var/lib/containers/storage/overlay/l/OY6KYKDZG6FEQWCDKF4IMYSRMW:/var/lib/containers/storage/overlay/l/OV36AVSN3JLKGPH3OHMLZBBYRU:/var/lib/containers/storage/overlay/l/EM5DBPVHIAEGGD3T7I7NUST4LT:/var/lib/containers/storage/overlay/l/TABUIF5ZLTCPIEMTQMJWQGX3SX:/var/lib/containers/storage/overlay/l/4KYFYT2D4EWG4BVJZY5TBF3MC7:/var/lib/containers/storage/overlay/l/C343GGUASW5ZMAJJPRKFT57SOP:/var/lib/containers/storage/overlay/l/QZGTGUIJSC5PPYUFP4ADRJQH3U:/var/lib/containers/storage/overlay/l/OY7IEEOLIVWQIAZNY4BHY6G2DI:/var/lib/containers/storage/overlay/l/XPKN6YQAIAPRYIOE6UAGIXYJ7Q:/var/lib/containers/storage/overlay/l/TNUUPP25IXLAZLGLPPJI4XFGVO:/var/lib/containers/storage/overlay/l/JVMSDHMTUFNNQGW333XXRSRQKP:/var/lib/containers/storage/overlay/l/VUURUXEPAHHAUB54JKHGBHZP4Z:/var/lib/containers/storage/overlay/l/FCQ36HISR7UV4VUAPQ5UO3NGZY,upperdir=/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/diff,workdir=/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c65,c795\""
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::iptables] Adding firewalld rules for network 10.89.0.0/24
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::firewalld] Subnet 10.89.0.0/24 already exists in zone trusted
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-B0A693FBE5D82 exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-B0A693FBE5D82 exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-SETMARK
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Mounted container \"8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44\" at \"/var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/merged\""
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Created root filesystem for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 at /var/lib/containers/storage/overlay/e9b752e4f1145d53abedfe87b381d3b7ff63437d1ef893c551b556fa2459344c/merged"
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-MASQ
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.27:80 --destination-port 8080 created on table nat and chain NETAVARK-DN-B0A693FBE5D82
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-B0A693FBE5D82 -p tcp --dport 8080 -m comment --comment 'dnat name: nextcloud-net id: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44' created on table nat and chain NETAVARK-HOSTPORT-DNAT
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain PREROUTING
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain OUTPUT
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::commands::setup] {
Jun 01 21:48:26 datamanagement podman[11101]:         "nextcloud-net": StatusBlock {
Jun 01 21:48:26 datamanagement podman[11101]:             dns_search_domains: Some(
Jun 01 21:48:26 datamanagement podman[11101]:                 [
Jun 01 21:48:26 datamanagement podman[11101]:                     "dns.podman",
Jun 01 21:48:26 datamanagement podman[11101]:                 ],
Jun 01 21:48:26 datamanagement podman[11101]:             ),
Jun 01 21:48:26 datamanagement podman[11101]:             dns_server_ips: Some(
Jun 01 21:48:26 datamanagement podman[11101]:                 [
Jun 01 21:48:26 datamanagement podman[11101]:                     10.89.0.1,
Jun 01 21:48:26 datamanagement podman[11101]:                 ],
Jun 01 21:48:26 datamanagement podman[11101]:             ),
Jun 01 21:48:26 datamanagement podman[11101]:             interfaces: Some(
Jun 01 21:48:26 datamanagement podman[11101]:                 {
Jun 01 21:48:26 datamanagement podman[11101]:                     "eth0": NetInterface {
Jun 01 21:48:26 datamanagement podman[11101]:                         mac_address: "62:10:a7:83:df:09",
Jun 01 21:48:26 datamanagement podman[11101]:                         subnets: Some(
Jun 01 21:48:26 datamanagement podman[11101]:                             [
Jun 01 21:48:26 datamanagement podman[11101]:                                 NetAddress {
Jun 01 21:48:26 datamanagement podman[11101]:                                     gateway: Some(
Jun 01 21:48:26 datamanagement podman[11101]:                                         10.89.0.1,
Jun 01 21:48:26 datamanagement podman[11101]:                                     ),
Jun 01 21:48:26 datamanagement podman[11101]:                                     ipnet: 10.89.0.27/24,
Jun 01 21:48:26 datamanagement podman[11101]:                                 },
Jun 01 21:48:26 datamanagement podman[11101]:                             ],
Jun 01 21:48:26 datamanagement podman[11101]:                         ),
Jun 01 21:48:26 datamanagement podman[11101]:                     },
Jun 01 21:48:26 datamanagement podman[11101]:                 },
Jun 01 21:48:26 datamanagement podman[11101]:             ),
Jun 01 21:48:26 datamanagement podman[11101]:         },
Jun 01 21:48:26 datamanagement podman[11101]:     }
Jun 01 21:48:26 datamanagement podman[11101]: [DEBUG netavark::commands::setup] "Setup complete"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
Jun 01 21:48:26 datamanagement podman[11086]: time="2023-06-01T21:48:26+08:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Setting Cgroups for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 to machine.slice:libpod:8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Workdir \"/var/www/html\" resolved to a volume or mount"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Created OCI spec for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 at /var/lib/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/config.json"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 -u 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata -p /run/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/pidfile -n nextcloud --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44]"
Jun 01 21:48:30 datamanagement conmon[11210]: conmon 8579d7094b0d8516acdf <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Jun 01 21:48:30 datamanagement conmon[11210]: conmon 8579d7094b0d8516acdf <ndebug>: terminal_ctrl_fd: 12
Jun 01 21:48:30 datamanagement conmon[11210]: conmon 8579d7094b0d8516acdf <ndebug>: winsz read side: 16, winsz write side: 16
Jun 01 21:48:30 datamanagement conmon[11210]: conmon 8579d7094b0d8516acdf <ndebug>: container PID: 11212
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Received: 11212"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=info msg="Got Conmon PID as 11210"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Created container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 in OCI runtime"
Jun 01 21:48:30 datamanagement podman[11086]: 2023-06-01 21:48:30.921737735 +0800 PST m=+4.270041128 container init 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (image=localhost/nextcloud:ffmpeg-2023-05-31, name=nextcloud, io.buildah.version=1.30.0)
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Starting container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 with command [/entrypoint.sh apache2-foreground]"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Started container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:30 datamanagement podman[11086]: time="2023-06-01T21:48:30+08:00" level=debug msg="Notify sent successfully"
Jun 01 21:48:32 datamanagement podman[11086]: 2023-06-01 21:48:32.03152684 +0800 PST m=+5.379830276 container start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (image=localhost/nextcloud:ffmpeg-2023-05-31, name=nextcloud, io.buildah.version=1.30.0)
Jun 01 21:48:32 datamanagement podman[11086]: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44
Jun 01 21:48:32 datamanagement podman[11086]: time="2023-06-01T21:48:32+08:00" level=debug msg="Called start.PersistentPostRunE(/usr/bin/podman start 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
Jun 01 21:48:32 datamanagement podman[11086]: time="2023-06-01T21:48:32+08:00" level=debug msg="Shutting down engines"
Jun 01 21:48:32 datamanagement systemd[1]: Started nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:38 datamanagement nextcloud[11210]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.89.0.27. Set the 'ServerName' directive globally to suppress this message

Even though the nextcloud container is now running, port 8080 is still not reachable, while iptables reports:

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  550 42234 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   23  1426 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   98  6715 NETAVARK-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    7   427 NETAVARK-B0A693FBE5D82  all  --  *      *       10.89.0.0/24         0.0.0.0/0           

Chain NETAVARK-B0A693FBE5D82 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    5   300 ACCEPT     all  --  *      *       0.0.0.0/0            10.89.0.0/24        
    1    60 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4         

Chain NETAVARK-DN-B0A693FBE5D82 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:9980
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.0.3:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.0.6:9980
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:8080
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:8080
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:10.89.0.27:80

Chain NETAVARK-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 /* dnat name: nextcloud-net id: f4047235d8eac7023d7063f471671a1e0e525b72ec209ee257af5ef3038b3f2d */
    0     0 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 /* dnat name: nextcloud-net id: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 */

Chain NETAVARK-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark portfw masq mark */ mark match 0x2000/0x2000

Chain NETAVARK-HOSTPORT-SETMARK (4 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x2000

After running firewall-cmd --reload:

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination  

Then running podman network reload --all:

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    8   646 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 NETAVARK-B0A693FBE5D82  all  --  *      *       10.89.0.0/24         0.0.0.0/0           

Chain NETAVARK-B0A693FBE5D82 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.89.0.0/24        
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4         

Chain NETAVARK-DN-B0A693FBE5D82 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:8080
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:8080
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:10.89.0.27:80
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.89.0.0/24         0.0.0.0/0            tcp dpt:9980
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:9980
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 to:10.89.0.6:9980

Chain NETAVARK-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 /* dnat name: nextcloud-net id: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 */
    0     0 NETAVARK-DN-B0A693FBE5D82  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9980 /* dnat name: nextcloud-net id: f4047235d8eac7023d7063f471671a1e0e525b72ec209ee257af5ef3038b3f2d */

Chain NETAVARK-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark portfw masq mark */ mark match 0x2000/0x2000

Chain NETAVARK-HOSTPORT-SETMARK (4 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x2000

However, port 8080 is still not exposed:

ss -lp | grep 8080
# BLANK
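Since the actual forwarding for a published port is done by the NETAVARK DNAT rules rather than by whatever process ss shows listening, the NAT table is the more informative thing to check. A sketch, assuming the rule format shown above; dnat_target is a hypothetical helper, not part of podman or netavark:

```shell
# Hypothetical helper: given `iptables -t nat -S` output, extract the DNAT
# target IP for a published host port so it can be compared with the
# container's current IP.
dnat_target() {
  rules="$1"; port="$2"
  printf '%s\n' "$rules" |
    grep -- "--dport $port " |
    sed -n 's/.*--to-destination \([0-9.]*\):.*/\1/p'
}

# On a live system (requires root):
#   dnat_target "$(iptables -t nat -S)" 8080
#   podman inspect nextcloud \
#     --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# If the two addresses differ, the DNAT rule points at a stale container.
```

If the target IP no longer matches any running container, the rule is stale and traffic to the published port goes nowhere, regardless of what ss reports.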
Luap99 commented 1 year ago

Why does the nextcloud container fail to start? Please check the full log for all errors. It is very possible that --restart on-failure and systemd's restart policy are conflicting when the container starts to fail. I strongly recommend that you recreate the container without the --restart option.

basilrabi commented 1 year ago

Wow, this is crazy. ss -lp | grep 8080 is still blank right now, but I tried connecting via port 8080 and reached the nextcloud instance successfully!

basilrabi commented 1 year ago

Why does the nextcloud container fail to start, please check the full log for all errors?

How do I check the full log? According to the debug log before the failure:

Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=info msg="Setting parallel job count to 49"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Starting parallel job on container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Stopping ctr 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (timeout 10)"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 is already stopped"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Cleaning up container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Tearing down network namespace at /run/netns/netns-4bf54521-00a2-3901-f629-58a6533b7ff4 for container 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Successfully loaded network nextcloud-net: &{nextcloud-net ed82155c4b467840190f734a8cea1e1e786831fd18243299e668510c20ba37b7 bridge podman1 2023-05-31 11:53:17.709174091 +0800 PST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] false false true [] map[] map[] map[driver:host-local]}"
Jun 01 21:48:25 datamanagement podman[11021]: time="2023-06-01T21:48:25+08:00" level=debug msg="Successfully loaded 2 networks"
Jun 01 21:48:25 datamanagement podman[11036]: [DEBUG netavark::commands::teardown] "Tearing down.."
Jun 01 21:48:25 datamanagement podman[11036]: [INFO  netavark::firewall] Using iptables firewall driver
Jun 01 21:48:25 datamanagement podman[11036]: [DEBUG netavark::commands::teardown] "Teardown complete"
Jun 01 21:48:26 datamanagement podman[11021]: time="2023-06-01T21:48:26+08:00" level=debug msg="Unmounted container \"8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44\""
Jun 01 21:48:26 datamanagement podman[11021]: 2023-06-01 21:48:26.317111574 +0800 PST m=+0.566072414 container cleanup 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 (image=localhost/nextcloud:ffmpeg-2023-05-31, name=nextcloud, io.buildah.version=1.30.0)
Jun 01 21:48:26 datamanagement podman[11021]: 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44
Jun 01 21:48:26 datamanagement podman[11021]: time="2023-06-01T21:48:26+08:00" level=debug msg="Called stop.PersistentPostRunE(/usr/bin/podman stop -t 10 8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44 --log-level debug)"
Jun 01 21:48:26 datamanagement podman[11021]: time="2023-06-01T21:48:26+08:00" level=debug msg="Shutting down engines"
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Failed with result 'timeout'.
Jun 01 21:48:26 datamanagement systemd[1]: Failed to start nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Consumed 2.672s CPU time.
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Scheduled restart job, restart counter is at 22.
Jun 01 21:48:26 datamanagement systemd[1]: Stopped nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service.
Jun 01 21:48:26 datamanagement systemd[1]: nextcloud-app.service: Consumed 2.672s CPU time.
Jun 01 21:48:26 datamanagement systemd[1]: Starting nextcloud-app.service - Podman container-8579d7094b0d8516acdf3c5a7dec12bf34e2aaca28b9ee4dd95b68c2934d6a44.service...
basilrabi commented 1 year ago

I just restarted again earlier today due to a kernel update. I also updated the images and re-created the containers. Now I have experienced the issue in the collabora online container: I can't connect via the bound port 9980, but ss says:

 ss -lp | grep 9980
tcp   LISTEN 0      4096                                                             0.0.0.0:9980                      0.0.0.0:*    users:(("conmon",pid=28277,fd=5))

After running firewall-cmd --reload and then podman network reload --all, I'm able to connect via the bound port again.

Luap99 commented 1 year ago

Something is messing with your iptables rules; you need to find out what.

Patrick-Hogan commented 1 year ago

I've been having the same or a very similar issue: sometimes, seemingly random rootful containers will be inaccessible over their published ports despite starting correctly, adding the correct iptables rules and showing as listening via ss. I've only seen this with containers/pods started through systemd.

I just tracked it down to incorrect iptables rules that appear to be added by podman (not sure when), causing traffic to be routed to an old container ID.

I was able to solve by running:

iptables -t nat -F NETAVARK-HOSTPORT-DNAT
iptables -t nat -F NETAVARK-DN-1D8721804F16F
podman network reload --all

I strongly suspect this is a podman/systemd issue, not something else interfering with the iptables rules (I run ufw, but that's all that sits on top of iptables, and all of the borked rules were in the two NETAVARK chains). I'm not sure what conditions actually cause the improper state (maybe failed startups after reboot? I've sometimes seen it without rebooting).
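The stale-rule situation can be detected mechanically by comparing the container IDs embedded in netavark's DNAT rule comments against the containers that are actually running. A sketch; dnat_ids is a hypothetical helper, and the comment format is taken from the iptables -L NETAVARK-HOSTPORT-DNAT output earlier in this issue:

```shell
# Hypothetical helper: extract the 64-hex-digit container IDs recorded in
# netavark's "dnat name: ... id: <id>" rule comments.
dnat_ids() {
  printf '%s\n' "$1" | sed -n 's/.* id: \([0-9a-f]\{64\}\) \*\/.*/\1/p'
}

# On a live system (requires root), list rule IDs with no matching container:
#   dnat_ids "$(iptables -t nat -L NETAVARK-HOSTPORT-DNAT -n)" | sort > /tmp/rule-ids
#   podman ps -q --no-trunc | sort > /tmp/live-ids
#   comm -23 /tmp/rule-ids /tmp/live-ids   # IDs present only in the rules
```

Any ID that appears in the rules but not in podman ps output belongs to an old container, which matches the misrouting described above.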

Currently running on Arch with podman version 4.5.1 and systemd version 253.5-1.

Luap99 commented 1 year ago

Can you add --log-level debug to all your podman commands in the systemd units? This will print all iptables rules. Then, if the issue occurs, check all the systemd unit logs and provide me the output. If it actually lists wrong iptables rules, then we have a serious problem.

Patrick-Hogan commented 1 year ago

Done. No idea when it will decide to fail again, though; sometimes it's weeks between instances.

saper commented 1 year ago

I have something similar to this on RHEL 8 with podman-4.2.0-6.module+el8.7.0+17498+a7f63b89.x86_64: after reboot, the rootful containers do not get any outbound connectivity. DNS does not resolve, and one can only ping the podman host by IP, nothing else.

I have checked this with tcpdump and I suspect no masquerading is done, despite iptables being seemingly correct.

Before I open (probably another) issue, a few questions:

1) Can you check whether you get outbound connectivity in your broken containers? I am using something like podman exec -ti _container_ /bin/bash -c "getent hosts _somehostnametoresolve_" while running tcpdump -i any not port 22 in a second window to watch the traffic NOT go out (there is a packet coming from the internal IP address, but not from the global IP address; there should be both).

2) Do you guys run NetworkManager? I noticed that systemd starts the podman socket before NetworkManager, and when NetworkManager starts up, it finds "cni-podman0" already present as unmanaged and does its "magic".

The difference here is that we have CNI and you have netavark, but if this boils down to NetworkManager vs iptables (my current theory), then this difference is not important...

saper commented 1 year ago

So, in my case, the issue was that IPv4 forwarding had been turned off after reboot... some important people put net.ipv4.ip_forward=0 in /etc/sysctl.conf, and this breaks container networking, obviously...
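This failure mode is cheap to rule out as a sanity check. A sketch; forward_enabled is a hypothetical helper for scripting the check, not a standard tool:

```shell
# Containers on a bridge network need IPv4 forwarding enabled on the host.
# Read the current value (works unprivileged):
#   sysctl net.ipv4.ip_forward        # expect "net.ipv4.ip_forward = 1"
# Fix it for the running system (requires root):
#   sysctl -w net.ipv4.ip_forward=1
# and make sure no later-applied file such as /etc/sysctl.conf sets it back
# to 0 on the next boot.

# Hypothetical helper: pull the value out of sysctl-style output so a boot
# check can assert on it.
forward_enabled() {
  printf '%s\n' "$1" | sed -n 's/^net\.ipv4\.ip_forward *= *\([01]\)$/\1/p'
}
```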

github-actions[bot] commented 1 year ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 1 year ago

@Luap99 any update on this issue?

Luap99 commented 1 year ago

I am going to close this as I cannot reproduce. As mentioned before, do not set the restart policy on the container; just rely on the systemd restart policy, otherwise the two restart policies will conflict when the container exits.
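The recommended division of labour can be sketched like this, reusing the unit name nextcloud-app.service from earlier in the thread. The drop-in is written to a temporary directory here so the sketch runs without root; on a real host it would go under /etc/systemd/system/nextcloud-app.service.d/:

```shell
# Recreate the container WITHOUT a podman-level restart policy (illustrative,
# not executed here), so that only systemd restarts it:
#   podman rm -f nextcloud
#   podman run --detach --name nextcloud --network nextcloud-net \
#     --publish 8080:80 docker.io/library/nextcloud   # note: no --restart flag
#   podman generate systemd --start-timeout=10 --restart-sec=10 nextcloud \
#     > /etc/systemd/system/nextcloud-app.service

# Restart behaviour then lives only in the unit; a drop-in like this can tune
# it without regenerating the whole service file:
unit_dir="$(mktemp -d)"
cat > "$unit_dir/10-restart.conf" <<'EOF'
[Service]
Restart=on-failure
RestartSec=10
EOF
```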