containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Unprivileged rootless podman pod container does not start, permission denied in /sys #22366

Closed: antonkoenig closed this issue 3 months ago

antonkoenig commented 5 months ago

Issue Description

We set up podman pods using kube YAML files, generate systemd unit files for them, and use a simple setup that sometimes runs exec on running containers. Some systemd timers are configured as well. The setup works fine when we downgrade to an old podman version (4.2.0), but it starts to fail with many permission-denied and other errors when we upgrade to podman 4.4.x or 4.6.x.

We use local directory mounts with mapped IDs, which we configure using podman unshare. But the reported issue also happens for pods and containers that do not have any mounts.
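
For illustration, this is roughly what we run to set up the mapped IDs (the path and IDs here are placeholders):

# chown the bind-mount source from inside the rootless user namespace,
# so the in-container user (e.g. UID/GID 26 for postgres) owns it
$ podman unshare chown -R 26:26 /home/<user>/data/pgdata
# verify the ownership from the namespace's point of view
$ podman unshare ls -ln /home/<user>/data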

I noticed that the merged directory path taken from the trace output does not contain files. I suppose that is normal for RHEL 8.9.

Sometimes podman processes get stuck.

Steps to reproduce the issue

Create a by-the-book podman kube pod container example and set up the generated systemd files. It may also reproduce with another setup; I can't easily show our setup here, but it's nothing fancy: only pods with a few containers. We spin up containers running PostgreSQL, Java Spring, or other Java applications. Probably any container would fail. We do not use systemd inside the containers.
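
For illustration, a minimal sequence close to our setup (file and pod names are placeholders):

# create the pod and its containers from the kube YAML
$ podman play kube mypod.yaml
# generate systemd unit files that recreate the pod on start
$ podman generate systemd --new --files --name mypod
# install them as user units and start the pod unit
$ mv pod-mypod.service container-*.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user start pod-mypod.service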

Describe the results you received

The systemd service units report generic dependency errors. Example: Dependency failed for Podman container-<redacted>-pod-<redacted>.service.
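
To find out what actually failed, I check the user units and journal roughly like this (the unit name is a placeholder):

$ systemctl --user --failed
$ journalctl --user -u container-<name>.service -b --no-pager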

When the container terminates quickly, I can't exec into it.

Sometimes I can exec into the container, and sometimes the container even continues to work. That is odd.

I see error messages in the trace debug output. Files in the /sys directory are missing.

time="2024-04-12T21:20:57+02:00" level=info msg="podman filtering at log level trace"
time="2024-04-12T21:20:57+02:00" level=debug msg="Called start.PersistentPreRunE(podman start --log-level=trace <redacted>-<redacted>)"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2024-04-12T21:20:57+02:00" level=debug msg="Initializing boltdb state at /home/<redacted>/.local/share/containers/storage/libpod/bolt_state.db"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using graph driver overlay"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using graph root /home/<redacted>/.local/share/containers/storage"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using run root /run/user/1500/containers"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using static dir /home/<redacted>/.local/share/containers/storage/libpod"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using tmp dir /run/user/1500/libpod/tmp"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using volume path /home/<redacted>/.local/share/containers/storage/volumes"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using transient store: false"
time="2024-04-12T21:20:57+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
time="2024-04-12T21:20:57+02:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-04-12T21:20:57+02:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-04-12T21:20:57+02:00" level=debug msg="Cached value indicated that metacopy is not being used"
time="2024-04-12T21:20:57+02:00" level=debug msg="Cached value indicated that native-diff is usable"
time="2024-04-12T21:20:57+02:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
time="2024-04-12T21:20:57+02:00" level=debug msg="Initializing event backend file"
time="2024-04-12T21:20:57+02:00" level=trace msg="found runtime \"/usr/bin/crun\""
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2024-04-12T21:20:57+02:00" level=trace msg="found runtime \"/usr/bin/runc\""
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2024-04-12T21:20:57+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2024-04-12T21:20:57+02:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
time="2024-04-12T21:20:57+02:00" level=info msg="Setting parallel job count to 7"
time="2024-04-12T21:20:57+02:00" level=debug msg="Created root filesystem for container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da at /home/<redacted>/.local/share/containers/storage/overlay/9b9c19bf1cf9b849091e38802890d6f58ea18843647bc9980405e7f83edf2807/merged"
time="2024-04-12T21:20:57+02:00" level=debug msg="Recreating container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da in OCI runtime"
time="2024-04-12T21:20:57+02:00" level=debug msg="Successfully cleaned up container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da"
time="2024-04-12T21:20:57+02:00" level=debug msg="Not modifying container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da /etc/passwd"
time="2024-04-12T21:20:57+02:00" level=debug msg="Not modifying container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da /etc/group"
time="2024-04-12T21:20:57+02:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
time="2024-04-12T21:20:57+02:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2024-04-12T21:20:57+02:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/<redacted>/.local/share/containers/storage/overlay/9b9c19bf1cf9b849091e38802890d6f58ea18843647bc9980405e7f83edf2807/merged\""
time="2024-04-12T21:20:57+02:00" level=debug msg="Created OCI spec for container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da at /home/<redacted>/.local/share/containers/storage/overlay-containers/7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da/userdata/config.json"
time="2024-04-12T21:20:57+02:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
time="2024-04-12T21:20:57+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da -u 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da -r /usr/bin/crun -b /home/<redacted>/.local/share/containers/storage/overlay-containers/7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da/userdata -p /run/user/1500/containers/overlay-containers/7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da/userdata/pidfile -n <redacted>-<redacted> --exit-dir /run/user/1500/libpod/tmp/exits --full-attach -l k8s-file:/home/<redacted>/.local/share/containers/storage/overlay-containers/7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da/userdata/ctr.log --log-level trace --syslog --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1500/containers/overlay-containers/7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da/userdata/oci-log --conmon-pidfile /run/user/1500/containers/overlay-containers/7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/<redacted>/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1500/containers --exit-command-arg --log-level --exit-command-arg trace --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1500/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/<redacted>/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da]"
time="2024-04-12T21:20:57+02:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2024-04-12T21:20:57+02:00" level=debug msg="Received: 792104"
time="2024-04-12T21:20:57+02:00" level=info msg="Got Conmon PID as 792102"
time="2024-04-12T21:20:57+02:00" level=debug msg="Created container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da in OCI runtime"
time="2024-04-12T21:20:57+02:00" level=debug msg="Starting container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da with command [/usr/local/bin/docker-entrypoint.sh postgres -c config_file=/var/lib/postgresql/data/postgresql.conf]"
time="2024-04-12T21:20:57+02:00" level=debug msg="Started container 7a89673da8f3b815350269347223e58792e511f40793d5d4cbff83db4d4255da"
<redacted>-<redacted>
time="2024-04-12T21:20:57+02:00" level=debug msg="Called start.PersistentPostRunE(podman start --log-level=trace <redacted>-<redacted>)"
time="2024-04-12T21:20:57+02:00" level=debug msg="Shutting down engines"

Describe the results you expected

Pods and containers start and services work as intended, just like they do using an older podman version.

podman info output

$ podman info
host:
  arch: amd64
  buildahVersion: 1.31.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.8-1.module+el8.9.0+21525+acb5d821.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: 866c877aef6f25a7c4fc1cce1680bb884a5f0997'
  cpuUtilization:
    idlePercent: 78.43
    systemPercent: 12.28
    userPercent: 9.29
  cpus: 2
  databaseBackend: boltdb
  distribution:
    distribution: '"rhel"'
    version: "8.9"
  eventLogger: file
  freeLocks: 2045
  hostname: <redacted>
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1500
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1500
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 4.18.0-513.24.1.el8_9.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 2952630272
  memTotal: 8059150336
  networkBackend: cni
  networkBackendInfo:
    backend: cni
    dns:
      package: podman-plugins-4.6.1-8.module+el8.9.0+21525+acb5d821.x86_64
      path: /usr/libexec/cni/dnsname
      version: |-
        CNI dnsname plugin
        version: 1.3.1
        commit: unknown
    package: containernetworking-plugins-1.3.0-8.module+el8.9.0+21525+acb5d821.x86_64
    path: /usr/libexec/cni
  ociRuntime:
    name: crun
    package: crun-1.8.7-1.module+el8.9.0+21525+acb5d821.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.7
      commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
      rundir: /run/user/1500/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1500/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.module+el8.9.0+21525+acb5d821.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 0
  swapTotal: 0
  uptime: 5h 3m 26.00s (Approximately 0.21 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/<redacted>/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/<redacted>/.local/share/containers/storage
  graphRootAllocated: 34349252608
  graphRootUsed: 903487488
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/1500/containers
  transientStore: false
  volumePath: /home/<redacted>/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.1
  Built: 1710329274
  BuiltTime: Wed Mar 13 12:27:54 2024
  GitCommit: ""
  GoVersion: go1.20.12
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

It is RHEL 8.9 (a fresh image install, or upgraded to the current state using yum update). It is provisioned from an Azure cloud gallery that contains hardened RHEL images. The Microsoft security scanner product is active. SELinux is active.

$ hostnamectl
   Static hostname: <redacted>
         Icon name: computer-vm
           Chassis: vm
        Machine ID: <redacted>
           Boot ID: <redacted>
    Virtualization: microsoft
  Operating System: Red Hat Enterprise Linux 8.9 (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8::baseos
            Kernel: Linux 4.18.0-513.24.1.el8_9.x86_64
      Architecture: x86-64

$ podman version
Client:       Podman Engine
Version:      4.6.1
API Version:  4.6.1
Go Version:   go1.20.12
Built:        Wed Mar 13 12:27:54 2024
OS/Arch:      linux/amd64

$ crun --version
crun version 1.8.7
commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
rundir: /run/user/1500/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

$ getenforce
Enforcing

Additional information

The issue started somewhere between podman 4.2.0 and podman 4.4.0, since version 4.2.0 works. The runc runtime also works with the old podman 4.2.0. I can downgrade to those versions and everything works fine.
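
On RHEL 8 podman is delivered through the container-tools yum module, so the downgrade amounts to picking an older stream (the exact stream names vary by release):

$ sudo dnf module list container-tools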

rhatdan commented 5 months ago

Did you get any AVC messages?

sudo ausearch -m avc -ts recent

antonkoenig commented 5 months ago

Hello Daniel, unfortunately the ausearch output is empty on one machine, and on another machine it only contains logrotate messages related to the Microsoft OMS and SCX logfiles. I think that's unrelated.

paulcalabro commented 5 months ago

Wow, I thought I was going crazy. I'm experiencing pretty much the exact same error after upgrading from 4.4.1 to 4.6.1: permission-denied messages for "sys/fs/cgroup" when running rootless containers. Oddly, the error goes away when I set --network=host. I'm configuring userns to auto as well, and the error also goes away if I leave the network setting alone and set userns to host. I also noticed slirp is not running. Same versions for the API, slirp4netns, OCI runtime, and conmon, but I'm using RHEL 9.3, a different network backend, and cgroup v2. SELinux is currently in permissive mode.
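
Roughly the variants I tried (the image is just a placeholder):

$ podman run --rm --userns=auto alpine true                  # fails: permission denied on sys/fs/cgroup
$ podman run --rm --userns=auto --network=host alpine true   # works
$ podman run --rm --userns=host alpine true                  # works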

dezza commented 5 months ago

#22274: it's been going on for a long time.

But something has changed, because it doesn't happen on a fresh VM, so it must be some state that accumulates while updating across several versions.

paulcalabro commented 5 months ago

I noticed that as well on a fresh install.

dezza commented 5 months ago

> I noticed that as well on a fresh install.

I did a podman system prune (and the other prune command) on my dev server between the earlier 4.7.2 (my pinned package) and the affected later versions while trying to get the newer versions to work, but that didn't help.

I'm thinking there's something stored somewhere that affects it, but I'm not deleting or moving the entire .local/ right now because the affected server is in use. That's why I opened an issue.

I can try backing it up and stopping all containers tomorrow, but I'm not hopeful that it will work.

What's so odd is that systemd runs the services inferred from the quadlet files just fine. It's ONLY podman run/exec/build that fails.

paulcalabro commented 5 months ago

I'm not sure if it's relevant, but when I run nsls on three servers (two working and one not), I notice the following:

paulcalabro commented 5 months ago

Just checked, they're all at 1.2.1 for slirp4netns.

dezza commented 5 months ago

Update: nope, that didn't help. I'm at a loss as to where the issue is, if it's not in these temporary files.

[ct@cos ~]$ mv .local/share/containers/ local_share_containers
[ct@cos ~]$ podman run -it localhost/netshoot
Error: crun: sd-bus call: Interactive authentication required.: Permission denied: OCI permission denied
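
The sd-bus error suggests crun could not reach my user session bus (the systemd cgroup manager needs it), so some checks I still want to run, nothing confirmed yet:

$ echo $XDG_RUNTIME_DIR            # should be /run/user/<uid>
$ systemctl --user is-active dbus  # the user session bus must be reachable
$ loginctl show-user $USER | grep -i linger
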
paulcalabro commented 4 months ago

@antonkoenig Do you use Centrify by any chance?

antonkoenig commented 4 months ago

@paulcalabro No, we don't use Centrify.

paulcalabro commented 4 months ago

I think I'm getting closer...

[pid 480980] openat(AT_FDCWD, "/sys/fs/cgroup/cgroup.controllers", O_RDONLY) = 9
[pid 480980] statx(9, "", AT_STATX_DONT_SYNC|AT_SYMLINK_NOFOLLOW|AT_EMPTY_PATH, STATX_SIZE, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_blksize=4096, stx_attributes=0, stx_nlink=1, stx_uid=65534, stx_gid=65534, stx_mode=S_IFREG|0444, stx_ino=4, stx_size=0, stx_blocks=0, stx_attributes_mask=STATX_ATTR_AUTOMOUNT|STATX_ATTR_MOUNT_ROOT|STATX_ATTR_DAX, stx_atime={tv_sec=1713349956, tv_nsec=180999970} /* 2024-04-17T06:32:36.180999970-0400 */, stx_btime={tv_sec=0, tv_nsec=0}, stx_ctime={tv_sec=1713349956, tv_nsec=180999970} /* 2024-04-17T06:32:36.180999970-0400 */, stx_mtime={tv_sec=1713349956, tv_nsec=180999970} /* 2024-04-17T06:32:36.180999970-0400 */, stx_rdev_major=0, stx_rdev_minor=0, stx_dev_major=0, stx_dev_minor=25}) = 0
[pid 480980] read(9, "cpuset cpu io memory hugetlb pids rdma misc\n", 1023) = 44
[pid 480980] read(9, "", 979)           = 0
[pid 480980] close(9)                   = 0
[pid 480980] openat(AT_FDCWD, "/sys/fs/cgroup/cgroup.subtree_control", O_WRONLY|O_CREAT|O_TRUNC, 0700) = -1 EACCES (Permission denied)

...for some reason /sys/fs/cgroup/cgroup.controllers and /sys/fs/cgroup/cgroup.subtree_control are owned by nobody. Reading works since the files are world-readable, but the system calls that write to those files fail with EACCES.
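
For anyone who wants to reproduce the check, this is roughly how the ownership looks from inside the rootless user namespace:

$ podman unshare stat -c '%u %g %A %n' /sys/fs/cgroup/cgroup.controllers /sys/fs/cgroup/cgroup.subtree_control
# uid/gid 65534 here is the overflow ID, i.e. nobody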

@rhatdan Any ideas what could be the issue?

UPDATE: Just found this discussion explaining why it's owned by nobody. Not sure if the failing file write matters.

rhatdan commented 4 months ago

You are using cgroups v1, not v2? cgroup v1 does not allow non-root users to write to it.
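
A quick way to check is, e.g.:

$ stat -fc %T /sys/fs/cgroup/    # prints cgroup2fs on v2, tmpfs on v1
$ podman info | grep -i cgroupversion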

paulcalabro commented 4 months ago

Thanks for taking a look @rhatdan, I appreciate it!

My issue might be slightly different from @antonkoenig's: I am using cgroups v2. The only other thing worth mentioning is that we're also using AD accounts to run the containers.

host:
  arch: amd64
  buildahVersion: 1.31.3
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-1.el9_2.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: 606c693de21bcbab87e31002e46663c5f2dc8a9b'
  cpuUtilization:
    idlePercent: 98.91
    systemPercent: 0.31
    userPercent: 0.78
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: '"rhel"'
    version: "9.3"
  eventLogger: journald
  freeLocks: 2047
  hostname: <REDACTED>
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 19000
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: <REDACTED>
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 5.14.0-362.18.1.el9_3.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1443966976
  memTotal: 8013557760
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.el9.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-2.el9_3.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.8.7-1.el9.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.7
      commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
      rundir: /run/user/<REDACTED>/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    path: /run/user/<REDACTED>/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.el9.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 2124140544
  swapTotal: 2147479552
  uptime: 634h 37m 51.00s (Approximately 26.42 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/<REDACTED>/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/<REDACTED>/containers/storage
  graphRootAllocated: 26828865536
  graphRootUsed: 9496113152
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/<REDACTED>/containers
  transientStore: false
  volumePath: /var/lib/containers/<REDACTED>/containers/storage/volumes
version:
  APIVersion: 4.6.1
  Built: 1705652564
  BuiltTime: Fri Jan 19 03:22:44 2024
  GitCommit: ""
  GoVersion: go1.20.12
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.1

rhatdan commented 4 months ago

In rootless mode the host "root" user is not mapped into the user namespace, so from the point of view of the user namespace it shows up as UID nobody.
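
You can see this directly, for example:

$ podman unshare cat /proc/self/uid_map
# host UID 0 has no entry in the map, so root-owned host files
# show up as the overflow UID 65534 (nobody) inside the namespace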

paulcalabro commented 4 months ago

Thanks, that makes sense. I have a conversation going with Giuseppe over in this thread:

https://github.com/containers/podman/discussions/16558#discussioncomment-9435939

The problem I'm trying to solve is that after my team did the exact same upgrade as @antonkoenig above, we're getting a permission denied when mounting /sys/fs/cgroup.

mount("cgroup2", "/proc/self/fd/6", "cgroup2", MS_NOSUID|MS_NODEV|MS_NOEXEC|MS_REC|MS_RELATIME, NULL) = -1 EACCES (Permission denied)

From our testing, we can run rootless containers as local users, but not as AD users. This used to work.
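
For reproducibility, this is roughly how I captured that trace (the image is a placeholder):

$ strace -f -e trace=mount -o /tmp/podman.trace podman run --rm alpine true
$ grep EACCES /tmp/podman.trace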

giuseppe commented 3 months ago

let's continue the discussion there and close this issue, as it appears to be something specific to your configuration