containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Still [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied for some containers #20886

Closed: luckylinux closed this issue 8 months ago

luckylinux commented 8 months ago

Issue Description

Podman rootless.

When starting some containers I get

[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

For instance, the eclipse-mosquitto container image gives this error, while the homeassistant container image works correctly (I haven't done anything with it yet, but it boots and the web interface can be accessed normally).

For the description on Debian Bookworm (Stable) with Podman 4.3.1, see https://github.com/containers/podman/issues/3024#issuecomment-1834480475.
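
For reference, the message shows up when starting the affected container with debug logging enabled (mosquitto01 is the podman-compose-created container from my setup; the full debug output is further below):

podman start mosquitto01 --log-level debug
# ... among the debug output:
# [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied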

What I tried:

On Debian Trixie I also tried to rebuild the eclipse-mosquitto container image using the provided Dockerfile and docker-entrypoint.sh file, as well as the required basic configuration from https://github.com/eclipse/mosquitto/tree/15292b20b0894ec7c5c3d47e4b22ee9d89f91132/docker/2.0

Following the advice in https://github.com/containers/podman/issues/3024, I tried to set "OOMScoreAdjust=" (empty) in both /etc/systemd/system/user@.service and /etc/systemd/system/user@.service.d/override.conf, but it doesn't help.

I am not sure whether these files are simply not used on Debian, or what the issue really is.

root@Rock5B-01:~# cat /etc/systemd/system/user@.service

#  SPDX-License-Identifier: LGPL-2.1-or-later
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=User Manager for UID %i
Documentation=man:user@.service(5)
After=user-runtime-dir@%i.service dbus.service systemd-oomd.service
Requires=user-runtime-dir@%i.service
IgnoreOnIsolate=yes

[Service]
User=%i
PAMName=systemd-user
Type=notify
ExecStart=/lib/systemd/systemd --user
Slice=user-%i.slice
KillMode=mixed
Delegate=pids memory cpu
TasksMax=infinity
TimeoutStopSec=120s
KeyringMode=inherit
#OOMScoreAdjust=100

root@Rock5B-01:~# cat /etc/systemd/system/user@.service.d/override.conf

[Service]
OOMScoreAdjust=
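
One way to check whether the drop-in is actually applied (a sketch; 1003 / podman-test are the UID and user name from my setup):

# Show the unit together with all drop-ins systemd actually loads
systemctl cat user@.service

# Effective value for the running user manager instance
systemctl show user@1003.service --property=OOMScoreAdjust

# Score inherited by the user's processes (and hence by conmon)
cat /proc/$(pgrep -u podman-test -x systemd | head -n1)/oom_score_adj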

Steps to reproduce the issue

  1. Install Debian Bookworm or Debian Trixie on the ARM64/AARCH64 architecture (I used a Rock 5B SBC)
  2. Install podman & podman-compose
  3. Build an updated kernel & ZFS module
  4. Set up the podman environment

Below is roughly the script I used for the setup. Some manual fixes may have been required afterwards: e.g. the subuid/subgid generation, and the ranges and starting values of the entries in /etc/subuid and /etc/subgid, need to be adjusted by hand so that they stay unique. If the script is rerun after errors, duplicate entries may end up in /etc/fstab.

#!/bin/bash

# Exit in case of error
#set -e

# Setup storage (placeholder, not used below)
setup_storage() {
    local lpath=$1
}

# Setup mountpoint (placeholder, not used below)
setup_mountpoint() {
    local lpath=$1
}

# Umount if mounted
umount_if_mounted() {
    local mp=$1

    if mountpoint -q "${mp}"
    then
        umount "${mp}"
    fi
}

# List subuid / subgid
list_subuid_subgid() {
     local SUBUID=/etc/subuid
     local SUBGID=/etc/subgid

     for i in $SUBUID $SUBGID; do [[ -f "$i" ]] || { echo "ERROR: $i does not exist, but is required."; exit 1; }; done
     [[ -n "$1" ]] && USERS=$1 || USERS=$(awk -F : '{x=x " " $1} END{print x}' $SUBUID)
     for i in $USERS; do
        awk -F : "\$1 ~ /$i/ {printf(\"%-16s sub-UIDs: %6d..%6d (%6d)\", \$1 \",\", \$2, \$2+\$3, \$3)}" $SUBUID
        awk -F : "\$1 ~ /$i/ {printf(\", sub-GIDs: %6d..%6d (%6d)\", \$2, \$2+\$3, \$3)}" $SUBGID
        echo ""
     done
}

# Define user
# User name
user=${1:-'podman-test'}

# Storage Path
storage=${2:-'zdata/PODMAN-TEST'}

# Mode (zfs / zvol)
mode=${3:-'zfs'}

# ZVOL FS (if type=zfs)
#fs=${3:-'ext4'}

# Define datasets
datasets=()
datasets+=("BUILD")
datasets+=("CERTIFICATES")
datasets+=("COMPOSE")
datasets+=("CONFIG")
datasets+=("LOG")
datasets+=("ROOT")
datasets+=("DATA")
datasets+=("IMAGES")
datasets+=("STORAGE")
datasets+=("VOLUMES")
datasets+=("CACHE")
datasets+=("LOCAL")

# Define ZVOL sizes in GB if applicable
# Be VERY generous with the allocations since no reservation of space is made
zsizes=()
zsizes+=("128G") # BUILD
zsizes+=("16G")  # CERTIFICATES
zsizes+=("16G")  # COMPOSE
zsizes+=("16G")  # CONFIG
zsizes+=("128G") # LOG
zsizes+=("128G") # ROOT
zsizes+=("256G") # DATA
zsizes+=("128G") # IMAGES
zsizes+=("256G") # STORAGE
zsizes+=("256G") # VOLUMES
zsizes+=("128G") # CACHE
zsizes+=("128G") # LOCAL

# Setup container user
touch /etc/{subgid,subuid}
useradd -c "Podman" -s /bin/bash $user
passwd -d $user
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 $user
passwd $user

nano /etc/subuid
nano /etc/subgid

# Create Root storage
zfs create -o compression=lz4 -o canmount=on ${storage}

# Allow over-subscribing in case of ZVOL
if [ "$mode" == "zvol"  ]
then
    zfs set refreservation=none ${storage}
else
    zfs set canmount=on ${storage}
fi

echo "# ${user} BIND Mounts" >> /etc/fstab
echo "/home/${user}/config /home/${user}/.config none defaults,rbind 0 0" >> /etc/fstab
mkdir -p "/home/${user}"
chattr -i "/home/${user}"
mkdir -p "/home/${user}/.config"

# Ensure proper permissions for config folder
chown -R $user:$user /home/${user}/.config

# Initialize ZVOL size counter
counter=0

# Create Datasets
for dataset in "${datasets[@]}"
do
    # Convert dataset name to lowercase mountpoint
    lname=${dataset,,}

        # Get name
        name="${storage}/${dataset}"

    # Create storage for image directory
    mkdir -p /home/${user}/${lname}/
    umount_if_mounted /home/${user}/${lname}/
    chattr -i /home/${user}/${lname}/
    chown -R $user:$user /home/${user}/${lname}/
    chattr +i /home/${user}/${lname}/

    if [ "$mode" == "zfs"  ]
    then
         # Create dataset
         zfs create -o compression=lz4 ${name}

         # Add FSTAB entry
         echo "/${name} /home/${user}/${lname} none defaults,rbind 0 0" >> /etc/fstab

             # Mount dataset
             zfs mount ${name}

         # Wait a bit
         sleep 1
        elif [ "$mode" == "zvol" ]
    then
         # Get ZVOL size
             zsize="${zsizes[$counter]}"

         # Create ZVOL
         zfs create -s -V ${zsize} ${name}

         # Create EXT4 Filesystem
             mkfs.ext4 /dev/zvol/${name}

         # Wait a bit
         sleep 1

         # Add FSTAB entry
             echo "/dev/zvol/${name} /home/${user}/${lname} ext4 defaults,nofail,x-systemd.automount 0 0" >> /etc/fstab
    else
         echo "MODE is invalid. It should either be <zfs> or <zvol>. Current value is <$mode>"
         echo "Aborting ..."
         exit;
    fi

    # Reload systemd to make use of new FSTAB
    systemctl daemon-reload

    # Mount according to FSTAB
    mount /home/${user}/${lname}/

    # Ensure proper permissions
    chown -R $user:$user /home/${user}/${lname}/

    # Increment counter
    counter=$((counter+1))
done

# Save Current Path
scriptspath=$(pwd)

# Install requirements
apt install --yes sudo

# Install podman
apt -y install podman

# Install podman-compose prerequisites (podman-compose itself is installed via aptitude at the end of the script)
apt -y install python3 python3-pip
#pip3 install podman-compose # Use latest version
#pip3 install https://github.com/containers/podman-compose/archive/refs/tags/v0.1.10.tar.gz # Use legacy version

# Allow unprivileged users to bind low ports (>= 80) for the rootless install
echo "net.ipv4.ip_unprivileged_port_start=80" >> /etc/sysctl.conf

# Allow unprivileged user namespaces (needed for rootless containers)
echo "kernel.unprivileged_userns_clone=1" >> /etc/sysctl.d/userns.conf

# Enable CGROUPS v2
# For Rock 5B SBC needs to be manually configured in /boot/mk_extlinux script
echo "Please add <systemd.unified_cgroup_hierarchy=1> to /etc/default/kernel-cmdline"
read -p "Press ENTER once ready" confirmation
nano /etc/default/kernel-cmdline

# Automatically mount ZFS datasets
zfs mount -a
sleep 2

# Automatically bind-mount remaining datasets
mount -a

# Create folder for running processes
userid=$(id -u $user)
mkdir -p /var/run/user/${userid}
chown -R $user:$user /var/run/user/${userid}
#su $user

# Populate config directory
mkdir -p /home/${user}/.config/containers
cd /home/${user}/.config/containers
wget https://src.fedoraproject.org/rpms/containers-common/raw/main/f/storage.conf -O storage.conf
wget https://src.fedoraproject.org/rpms/containers-common/raw/main/f/registries.conf -O registries.conf
wget https://src.fedoraproject.org/rpms/containers-common/raw/main/f/default-policy.json -O default-policy.json

# Setup folders and set correct permissions
chown -R $user:$user /home/$user

# Set XDG_RUNTIME_DIR for the podman user
echo "export XDG_RUNTIME_DIR=/run/user/${userid}" >> /home/$user/.bashrc
echo "export XDG_RUNTIME_DIR=/run/user/${userid}" >> /home/$user/.bash_profile

# Change some configuration
#sed -i "s/^runroot = \"/run/containers/storage\"/runroot = \"/var/run/user/${userid}\"/g" storage.conf
#sed -i "s/^graphroot = \"/var/lib/containers/storage\"/graphroot = \"${storage}\"/g" storage.conf
#sed -i "s/^rootless_storage_path = \"\$HOME/.local/share/containers/storage\"/rootless_storage_path = \"${storage}\"/g" storage.conf
#sed -Ei "s|^runroot = \"/run/containers/storage\"|#runroot = \"/var/run/user/${userid}\"|g" storage.conf
sed -Ei "s|^runroot = \"/run/containers/storage\"|#runroot = \"/run/user/${userid}\"|g" storage.conf
sed -Ei "s|^graphroot = \"/var/lib/containers/storage\"|#graphroot = \"${storage}\"|g" storage.conf
sed -Ei "s|^# rootless_storage_path = \"\$HOME/.local/share/containers/storage\"|rootless_storage_path = \"${storage}\"|g" storage.conf
sed -Ei "s|^#mount_program = \"/usr/bin/fuse-overlayfs\"|mount_program = \"/usr/bin/fuse-overlayfs\"|g" storage.conf

# Enable lingering sessions
loginctl enable-linger ${userid}

# Upgrade other parts of the system
apt --yes dist-upgrade

# Rebuild initramfs
update-initramfs -k all  -u

# Setup Systemd
# Source: https://salsa.debian.org/debian/libpod/-/blob/debian/sid/contrib/systemd/README.md#user-podman-service-run-as-given-user-aka-rootless
# Need to execute as podman user
# Setup files
sudo -u $user mkdir -p /home/$user/.config/systemd/user
sudo -u $user cp /lib/systemd/user/podman.service /home/$user/.config/systemd/user/
sudo -u $user cp /lib/systemd/user/podman.socket /home/$user/.config/systemd/user/
sudo -u $user cp /lib/systemd/user/podman-auto-update.timer /home/$user/.config/systemd/user/
sudo -u $user cp /lib/systemd/user/podman-auto-update.service /home/$user/.config/systemd/user/
sudo -u $user cp /lib/systemd/user/podman-restart.service /home/$user/.config/systemd/user/

# Install additional packages
apt --yes install uidmap fuse-overlayfs slirp4netns

# Disable root-level services
systemctl disable podman-restart.service
systemctl disable podman.socket
systemctl disable podman-auto-update

# Enable user-level services
sudo -u $user systemctl --user enable podman.socket
sudo -u $user systemctl --user start podman.socket

sudo -u $user systemctl --user enable podman.service
sudo -u $user systemctl --user start podman.service

sudo -u $user systemctl --user enable podman-restart.service
sudo -u $user systemctl --user start podman-restart.service

sudo -u $user systemctl --user enable podman-auto-update.service
sudo -u $user systemctl --user start podman-auto-update.service

sudo -u $user systemctl --user status podman.socket podman.service podman-restart.service podman-auto-update.service

# https://github.com/containers/podman/issues/3024#issuecomment-1742105831 ,  https://github.com/containers/podman/issues/3024#issuecomment-1762708730
mkdir -p /etc/systemd/system/user@.service.d
cd /etc/systemd/system/user@.service.d
echo "[Service]" > override.conf
echo "OOMScoreAdjust=" >> override.conf

# Prevent Systemd from auto restarting Podman Containers too quickly and timing out
cd $scriptspath
mkdir -p /etc/systemd/user.conf.d/
cp podman.systemd.conf /etc/systemd/user.conf.d/podman.conf

# Install podman-compose
aptitude -y install podman-compose

Called like:
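
Roughly, using the positional parameters defined at the top of the script (the script file name is only illustrative):

# user, storage dataset and mode fall back to the defaults
# 'podman-test', 'zdata/PODMAN-TEST' and 'zfs' when omitted
./setup-podman.sh podman-test zdata/PODMAN-TEST zfs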

Describe the results you received

When starting some containers such as eclipse-mosquitto I get

[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

Journalctl log on Debian Trixie as root

root@Rock5B-01:~# journalctl -f -t conmon
Dec 03 06:48:01 Rock5B-01 conmon[4904]: conmon 09445f7d86bf0d1afca6 <ndebug>: terminal_ctrl_fd: 14
Dec 03 06:48:01 Rock5B-01 conmon[4904]: conmon 09445f7d86bf0d1afca6 <ndebug>: winsz read side: 17, winsz write side: 17
Dec 03 06:48:01 Rock5B-01 conmon[4904]: conmon 09445f7d86bf0d1afca6 <ndebug>: container PID: 4918
Dec 03 06:48:01 Rock5B-01 conmon[4904]: conmon 09445f7d86bf0d1afca6 <ninfo>: container 4918 exited with status 1
Dec 03 06:48:13 Rock5B-01 conmon[5171]: conmon cb12f33ddb58167892a6 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Dec 03 06:48:13 Rock5B-01 conmon[5172]: conmon cb12f33ddb58167892a6 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}
Dec 03 06:48:13 Rock5B-01 conmon[5172]: conmon cb12f33ddb58167892a6 <ndebug>: terminal_ctrl_fd: 14
Dec 03 06:48:13 Rock5B-01 conmon[5172]: conmon cb12f33ddb58167892a6 <ndebug>: winsz read side: 17, winsz write side: 17
Dec 03 06:48:13 Rock5B-01 conmon[5172]: conmon cb12f33ddb58167892a6 <ndebug>: container PID: 5186
Dec 03 06:48:13 Rock5B-01 conmon[5172]: conmon cb12f33ddb58167892a6 <ninfo>: container 5186 exited with status 3
Dec 03 06:49:58 Rock5B-01 conmon[5406]: conmon cb12f33ddb58167892a6 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Dec 03 06:49:58 Rock5B-01 conmon[5407]: conmon cb12f33ddb58167892a6 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Dec 03 06:49:58 Rock5B-01 conmon[5407]: conmon cb12f33ddb58167892a6 <ndebug>: terminal_ctrl_fd: 13
Dec 03 06:49:58 Rock5B-01 conmon[5407]: conmon cb12f33ddb58167892a6 <ndebug>: winsz read side: 16, winsz write side: 16
Dec 03 06:49:58 Rock5B-01 conmon[5407]: conmon cb12f33ddb58167892a6 <ndebug>: container PID: 5421
Dec 03 06:49:58 Rock5B-01 conmon[5407]: conmon cb12f33ddb58167892a6 <ninfo>: container 5421 exited with status 3
Dec 03 07:05:56 Rock5B-01 conmon[15000]: conmon ecbeb1657291c3abc080 <nwarn>: runtime stderr: runc create failed: unable to start container process: exec: "/docker-entrypoint.sh": permission denied
Dec 03 07:05:56 Rock5B-01 conmon[15000]: conmon ecbeb1657291c3abc080 <error>: Failed to create container: exit status 1
Dec 03 07:05:58 Rock5B-01 conmon[15206]: conmon ecbeb1657291c3abc080 <nwarn>: runtime stderr: runc create failed: unable to start container process: exec: "/docker-entrypoint.sh": permission denied
Dec 03 07:05:58 Rock5B-01 conmon[15206]: conmon ecbeb1657291c3abc080 <error>: Failed to create container: exit status 1
Dec 03 07:08:13 Rock5B-01 conmon[15979]: conmon ccf2cc15c9acaed9d947 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Dec 03 07:08:13 Rock5B-01 conmon[15980]: conmon ccf2cc15c9acaed9d947 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Dec 03 07:08:13 Rock5B-01 conmon[15980]: conmon ccf2cc15c9acaed9d947 <ndebug>: terminal_ctrl_fd: 13
Dec 03 07:08:13 Rock5B-01 conmon[15980]: conmon ccf2cc15c9acaed9d947 <ndebug>: winsz read side: 16, winsz write side: 16
Dec 03 07:08:14 Rock5B-01 conmon[15980]: conmon ccf2cc15c9acaed9d947 <ndebug>: container PID: 15994
Dec 03 07:08:14 Rock5B-01 conmon[15980]: conmon ccf2cc15c9acaed9d947 <ninfo>: container 15994 exited with status 3
Dec 03 07:09:49 Rock5B-01 conmon[16211]: conmon ccf2cc15c9acaed9d947 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Dec 03 07:09:49 Rock5B-01 conmon[16212]: conmon ccf2cc15c9acaed9d947 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Dec 03 07:09:49 Rock5B-01 conmon[16212]: conmon ccf2cc15c9acaed9d947 <ndebug>: terminal_ctrl_fd: 13
Dec 03 07:09:49 Rock5B-01 conmon[16212]: conmon ccf2cc15c9acaed9d947 <ndebug>: winsz read side: 16, winsz write side: 16
Dec 03 07:09:49 Rock5B-01 conmon[16212]: conmon ccf2cc15c9acaed9d947 <ndebug>: container PID: 16226
Dec 03 07:09:49 Rock5B-01 conmon[16212]: conmon ccf2cc15c9acaed9d947 <ninfo>: container 16226 exited with status 3
Dec 03 07:10:14 Rock5B-01 conmon[16440]: conmon ccf2cc15c9acaed9d947 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: terminal_ctrl_fd: 13
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: winsz read side: 16, winsz write side: 16
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: container PID: 16456
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ninfo>: container 16456 exited with status 3

Journalctl on Debian Trixie as podman-test user

podman-test@Rock5B-01:~/compose/mosquitto01$ journalctl -f -q -t conmon
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: terminal_ctrl_fd: 13
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: winsz read side: 16, winsz write side: 16
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ndebug>: container PID: 16456
Dec 03 07:10:14 Rock5B-01 conmon[16442]: conmon ccf2cc15c9acaed9d947 <ninfo>: container 16456 exited with status 3
Dec 03 07:37:22 Rock5B-01 conmon[16845]: conmon ccf2cc15c9acaed9d947 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Dec 03 07:37:22 Rock5B-01 conmon[16846]: conmon ccf2cc15c9acaed9d947 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Dec 03 07:37:22 Rock5B-01 conmon[16846]: conmon ccf2cc15c9acaed9d947 <ndebug>: terminal_ctrl_fd: 13
Dec 03 07:37:22 Rock5B-01 conmon[16846]: conmon ccf2cc15c9acaed9d947 <ndebug>: winsz read side: 16, winsz write side: 16
Dec 03 07:37:22 Rock5B-01 conmon[16846]: conmon ccf2cc15c9acaed9d947 <ndebug>: container PID: 16860
Dec 03 07:37:22 Rock5B-01 conmon[16846]: conmon ccf2cc15c9acaed9d947 <ninfo>: container 16860 exited with status 3

Output of podman start with debug information for the affected container

podman-test@Rock5B-01:~/compose/mosquitto01$ podman start mosquitto01 --log-level debug
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called start.PersistentPreRunE(podman start mosquitto01 --log-level debug) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/podman-test/storage/libpod/bolt_state.db 
DEBU[0000] systemd-logind: Unknown object '/'.          
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/podman-test/storage   
DEBU[0000] Using run root /run/user/1003/containers     
DEBU[0000] Using static dir /home/podman-test/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1003/libpod/tmp      
DEBU[0000] Using volume path /home/podman-test/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 25             
DEBU[0000] Made network namespace at /run/user/1003/netns/netns-3bc1e8ef-f24b-61e6-2b22-97ce83d5119f for container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 
DEBU[0000] creating rootless network namespace with name "rootless-netns-d0df4b4b844fdb2d7ccf" 
DEBU[0000] Ignoring global metacopy option, the mount program doesn't support it 
DEBU[0000] overlay: mount_data=lowerdir=/home/podman-test/storage/overlay/l/5PS6ZPBHJUNZVPI4DNIFTGEH67:/home/podman-test/storage/overlay/l/A2YKNFXKVR2WC2MQ2N7DUOWZFL:/home/podman-test/storage/overlay/l/L3ZHF4MHYQR4GV5BHNBIGRL6FL,upperdir=/home/podman-test/storage/overlay/4631a2e69b377459bd8dfa60b8be87a3e4db25652af7eb5478f89c1722a01e8c/diff,workdir=/home/podman-test/storage/overlay/4631a2e69b377459bd8dfa60b8be87a3e4db25652af7eb5478f89c1722a01e8c/work,nodev 
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/1003/netns/rootless-netns-d0df4b4b844fdb2d7ccf tap0 
DEBU[0000] Mounted container "ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399" at "/home/podman-test/storage/overlay/4631a2e69b377459bd8dfa60b8be87a3e4db25652af7eb5478f89c1722a01e8c/merged" 
DEBU[0000] Created root filesystem for container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 at /home/podman-test/storage/overlay/4631a2e69b377459bd8dfa60b8be87a3e4db25652af7eb5478f89c1722a01e8c/merged 
DEBU[0000] The path of /etc/resolv.conf in the mount ns is "/etc/resolv.conf" 
DEBU[0000] Successfully loaded network mosquitto01_podman: &{mosquitto01_podman ee020b3dafa49c3f916d2fd384712e4c3a063b3f38143340f59b15526f2fa28b bridge podman1 2023-12-02 19:20:23.935021814 +0000 UTC [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[com.docker.compose.project:mosquitto01 io.podman.compose.project:mosquitto01] map[] map[driver:host-local]} 
DEBU[0000] Successfully loaded 2 networks               
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network mosquitto01_podman
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.11/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-ED0EA2494DC9C created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT exists on table nat and chain NETAVARK-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-ED0EA2494DC9C exists on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-ED0EA2494DC9C created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-ED0EA2494DC9C created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 1883 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 1883 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 1883 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 1883 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.11:1883 --destination-port 1883 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.11:1883 --destination-port 1883 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 8885 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 8885 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 8885 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 8885 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.11:8885 --destination-port 8885 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.11:8885 --destination-port 8885 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 9001 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 9001 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 9001 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 9001 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.11:9001 --destination-port 9001 exists on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.11:9001 --destination-port 9001 created on table nat and chain NETAVARK-DN-ED0EA2494DC9C
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-ED0EA2494DC9C -p tcp --dport 1883 -m comment --comment 'dnat name: mosquitto01_podman id: ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399' exists on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-ED0EA2494DC9C -p tcp --dport 1883 -m comment --comment 'dnat name: mosquitto01_podman id: ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399' created on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-ED0EA2494DC9C -p tcp --dport 8885 -m comment --comment 'dnat name: mosquitto01_podman id: ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399' exists on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-ED0EA2494DC9C -p tcp --dport 8885 -m comment --comment 'dnat name: mosquitto01_podman id: ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399' created on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-ED0EA2494DC9C -p tcp --dport 9001 -m comment --comment 'dnat name: mosquitto01_podman id: ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399' exists on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-ED0EA2494DC9C -p tcp --dport 9001 -m comment --comment 'dnat name: mosquitto01_podman id: ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399' created on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain OUTPUT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
[DEBUG netavark::dns::aardvark] Spawning aardvark server
[DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "--user", "/usr/lib/podman/aardvark-dns", "--config", "/run/user/1003/containers/networks/aardvark-dns", "-p", "53", "run"]
[DEBUG netavark::commands::setup] {
        "mosquitto01_podman": StatusBlock {
            dns_search_domains: Some(
                [
                    "dns.podman",
                ],
            ),
            dns_server_ips: Some(
                [
                    10.89.0.1,
                ],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "52:46:c5:ca:da:dd",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.89.0.1,
                                    ),
                                    ipnet: 10.89.0.11/24,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
DEBU[0000] rootlessport: time="2023-12-03T07:37:22Z" level=info msg="Starting parent driver" 
DEBU[0000] rootlessport: time="2023-12-03T07:37:22Z" level=info msg="opaque=map[builtin.readypipepath:/run/user/1003/libpod/tmp/rootlessport3244874395/.bp-ready.pipe builtin.socketpath:/run/user/1003/libpod/tmp/rootlessport3244874395/.bp.sock]" 
DEBU[0000] rootlessport: time="2023-12-03T07:37:22Z" level=info msg="Starting child driver in child netns (\"/proc/self/exe\" [rootlessport-child])" 
DEBU[0000] rootlessport: time="2023-12-03T07:37:22Z" level=info msg="Waiting for initComplete" 
DEBU[0000] rootlessport: time="2023-12-03T07:37:22Z" level=info msg="initComplete is closed; parent and child established the communication channel"
time="2023-12-03T07:37:22Z" level=info msg="Exposing ports [{ 1883 1883 1 tcp} { 8885 8885 1 tcp} { 9001 9001 1 tcp}]" 
DEBU[0000] rootlessport: time="2023-12-03T07:37:22Z" level=info msg=Ready 
DEBU[0000] rootlessport is ready                        
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting Cgroups for container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 to user.slice:libpod:ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/home/podman-test/storage/overlay/4631a2e69b377459bd8dfa60b8be87a3e4db25652af7eb5478f89c1722a01e8c/merged" 
DEBU[0000] Created OCI spec for container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 at /home/podman-test/storage/overlay-containers/ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 -u ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 -r /usr/bin/runc -b /home/podman-test/storage/overlay-containers/ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399/userdata -p /run/user/1003/containers/overlay-containers/ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399/userdata/pidfile -n mosquitto01 --exit-dir /run/user/1003/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1003/containers/overlay-containers/ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman-test/storage --exit-command-arg --runroot --exit-command-arg /run/user/1003/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1003/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/podman-test/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399.scope 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 16860                              
INFO[0000] Got Conmon PID as 16846                      
DEBU[0000] Created container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 in OCI runtime 
DEBU[0000] Adding nameserver(s) from network status of '["10.89.0.1"]' 
DEBU[0000] Adding search domain(s) from network status of '["dns.podman"]' 
DEBU[0000] Starting container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 with command [/docker-entrypoint.sh /usr/sbin/mosquitto -c /mosquitto/config/mosquitto.conf] 
DEBU[0000] Started container ccf2cc15c9acaed9d947a1f961de5ee8d9bfdfa297b5a2bdca80ce7233e27399 
DEBU[0000] Notify sent successfully                     
mosquitto01
DEBU[0000] Called start.PersistentPostRunE(podman start mosquitto01 --log-level debug) 
DEBU[0000] Shutting down engines         

Describe the results you expected

Podman should start the container correctly.

podman info output

host:
  arch: arm64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_arm64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 98.46
    systemPercent: 0.58
    userPercent: 0.96
  cpus: 8
  databaseBackend: boltdb
  distribution:
    codename: trixie
    distribution: debian
    version: unknown
  eventLogger: journald
  freeLocks: 2047
  hostname: Rock5B-01
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.6.3-1-arm64
  linkmode: dynamic
  logDriver: journald
  memFree: 14719148032
  memTotal: 16477868032
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-5_arm64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-4_arm64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: crun_1.12-1_arm64
    path: /usr/bin/crun
    version: |-
      crun version 1.12
      commit: ce429cb2e277d001c2179df1ac66a470f00802ae
      rundir: /run/user/1003/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20231107.74e6f48-1_arm64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1003/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.1-1_arm64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 1h 5m 43.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/podman-test/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs_1.13-1_arm64
      Version: |-
        fusermount3 version: 3.14.0
        fuse-overlayfs: version 1.13-dev
        FUSE library version 3.14.0
        using FUSE kernel interface version 7.31
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /home/podman-test/storage
  graphRootAllocated: 269427478528
  graphRootUsed: 24883200
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 11
  runRoot: /run/user/1003/containers
  transientStore: false
  volumePath: /home/podman-test/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.21.3
  Os: linux
  OsArch: linux/arm64
  Version: 4.7.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

uname -a
Linux Rock5B-01 6.6.3-1-arm64 #1 SMP PREEMPT Debian 6.6.3-1 (2023-11-28) aarch64 GNU/Linux

Debian GNU/Linux AARCH64 / ARM64 Trixie (previously was Bookworm)

podman version
Client:       Podman Engine
Version:      4.7.2
API Version:  4.7.2
Go Version:   go1.21.3
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/arm64

dpkg -s podman
Package: podman
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 39476
Maintainer: Debian Go Packaging Team <pkg-go-maintainers@lists.alioth.debian.org>
Architecture: arm64
Source: libpod
Version: 4.7.2+ds1-2
Depends: conmon, crun | runc, golang-github-containers-common, libc6 (>= 2.34), libdevmapper1.02.1 (>= 2:1.02.97), libgpgme11 (>= 1.4.1), libseccomp2 (>= 2.5.0), libsqlite3-0 (>= 3.36.0), libsubid4 (>= 1:4.11.1)
Recommends: buildah (>= 1.31), catatonit | tini | dumb-init, dbus-user-session, passt, slirp4netns
Suggests: containers-storage, docker-compose, iptables
Conffiles:
 /etc/cni/net.d/87-podman-bridge.conflist a87c090f17c5274af878e7106e969b60
 /etc/containers/libpod.conf ceec5a77b5f6a56d212eeed7b707d322
Description: tool to manage containers and pods
 Podman (the POD MANager) is a tool for managing containers and images, volumes
 mounted into those containers, and pods made from groups of containers.
 .
 At a high level, the scope of Podman and libpod is the following:

Additional information

luckylinux commented 8 months ago

This seems to be due to an SSL misconfiguration (or missing configuration). It seems to work now; see the comments in https://github.com/eclipse/mosquitto/issues/2961.

giuseppe commented 8 months ago

so can we close this issue or is there anything to fix/check in Podman?

luckylinux commented 8 months ago

From your point of view this may be normal Podman behaviour.

But a permission error like this (which points to a systemd setting, and to crun/runc bugs as well) actually being caused by some SSL configuration inside a container is totally counterintuitive in my view. The error is severely misleading.

giuseppe commented 8 months ago

I don't think the SSL configuration can affect the oom_score_adj value and the error you've reported here.

Maybe that caused a systemctl daemon-reload or something like that?

luckylinux commented 8 months ago

It was the SSL certificate, as indicated in the mosquitto log. That's why the Podman error made no sense whatsoever. As soon as I regenerated proper certificates, everything started working and the error disappeared.
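
For completeness, regenerating a self-signed CA plus server certificate looks roughly like this (file names and CNs are placeholders; the actual fix followed the mosquitto issue linked above):

# Placeholder sketch: recreate broker certificates with openssl
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=my-mqtt-ca" \
        -keyout ca.key -out ca.crt
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=mqtt.example.org" \
        -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 3650 -out server.crt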

Luap99 commented 8 months ago

This is just a harmless log line from conmon; it is only logged at debug level. Basically, conmon tries to make itself unkillable, but this is not allowed because a rootless user cannot lower the OOM score. This is totally expected and not a problem. I guess we could try to make conmon aware of it and not log this line when running rootless.
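
For illustration, the kernel behaviour can be reproduced from a plain shell as the rootless user (a sketch, independent of conmon): raising the score is always allowed, lowering it below the inherited minimum requires CAP_SYS_RESOURCE.

cat /proc/self/oom_score_adj           # e.g. 100, inherited from user@.service
echo 500 > /proc/self/oom_score_adj    # raising the score is allowed
echo -500 > /proc/self/oom_score_adj   # lowering it fails with "Permission denied"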

Just run without --log-level=debug and you will not see this message.

luckylinux commented 8 months ago

Yeah, without --log-level=debug I didn't see anything when I started the container, but then I could see it was immediately down. Hence the need for --log-level=debug in the first place.

From the other threads I thought that this failed to write to /proc/self/oom_score_adj: Permission denied was fatal, as the reporters couldn't run their containers otherwise. Re-reading the threads, some people call it an "error" while others say the container still runs after it.

Luap99 commented 8 months ago

From your log the container is started; if it exited after that, it is because the process inside exited, not because it failed to start, so I suggest you look at your application logs.
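
For example (container name taken from the report above):

# The conmon debug line is harmless; the real failure reason is in the
# container's own output and exit status.
podman logs --tail 50 mosquitto01
podman inspect --format '{{.State.Status}} {{.State.ExitCode}}' mosquitto01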

There were fatal oom_score_adj errors, but those were caused by the OCI runtime hard-failing because we requested an invalid value. conmon never hard-failed because of this.

Anyway, since you have it working now, I am going to close this one.