Closed. zackman0010 closed this issue 1 year ago.
It looks like this issue in systemd: https://bugzilla.redhat.com/show_bug.cgi?id=2133792
I did see that bug report, but I wasn't sure if it would be the same issue, as the bug was reported on Fedora and we use RHEL. Sorry, I'm not as familiar with core services like systemd when it comes to differences between distributions.
Could you check whether the reproducer in that bug also fails on your system?
NAME=test-$RANDOM.slice
echo Running $NAME
busctl --user call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartTransientUnit 'ssa(sv)a(sa(sv))' "$NAME" "replace" 6 "Description" "s" "slice" "Wants" as 1 "user.slice" "DefaultDependencies" b false MemoryAccounting b true CPUAccounting b true BlockIOAccounting b true 0
On a working system I get:
Running test-20500.slice
o "/org/freedesktop/systemd1/job/44528"
while on a system that has the issue:
Running test-31900.slice
Call failed: Unit test-31900.slice already exists.
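If a run gets stuck in this state, the leftover transient unit can usually be cleared without a reboot. A minimal sketch (diagnostic commands, not runnable outside a systemd user session), assuming `$NAME` is the slice name printed by the reproducer above:

```shell
# Stop the leftover transient slice (ignore errors if it is not running)
# and clear its failed state so the name can be reused.
systemctl --user stop "$NAME" 2>/dev/null || true
systemctl --user reset-failed "$NAME"
# Confirm no test-*.slice units remain:
systemctl --user list-units --all 'test-*.slice'
```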
Also, in the podman info output I see you are using crun, while the error message is coming from runc. Are you running podman info and running the container as two different users?
@giuseppe Running the reproducer does NOT result in the bug for me.
csapp@sl-sispacew1t:~ > NAME=test-$RANDOM.slice
csapp@sl-sispacew1t:~ > echo Running $NAME
Running test-17747.slice
csapp@sl-sispacew1t:~ > busctl --user call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartTransientUnit 'ssa(sv)a(sa(sv))' "$NAME" "replace" 6 "Description" "s" "slice" "Wants" as 1 "user.slice" "DefaultDependencies" b false MemoryAccounting b true CPUAccounting b true BlockIOAccounting b true 0
o "/org/freedesktop/systemd1/job/14404"
As for crun vs runc, I believe I copied the error message from a container that was created using runc (and so kept using runc after I changed my settings), while podman info was run later, after I had already tried switching to crun. I still get failures when running with crun, just with a much shorter error message.
csapp@sl-sispacew1t:~ > podman --log-level=debug run hello-world
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run hello-world)
.....
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9 -u a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9 -r /usr/bin/crun -b /usr/local/csapps/podman/storage/overlay-containers/a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9/userdata -p /run/user/213/storage/overlay-containers/a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9/userdata/pidfile -n sad_ramanujan --exit-dir /run/user/213/libpod/tmp/exits --full-attach -s -l k8s-file:/usr/local/csapps/podman/storage/overlay-containers/a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9/userdata/ctr.log --log-level debug --syslog --log-size-max 10485760 --conmon-pidfile /run/user/213/storage/overlay-containers/a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /usr/local/csapps/podman/storage --exit-command-arg --runroot --exit-command-arg /run/user/213/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/213/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /usr/local/csapps/podman/storage/volumes --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9
DEBU[0000] Tearing down network namespace at /run/user/213/netns/netns-c0aeb138-f6f6-4203-bbdc-41931e584fd6 for container a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9
DEBU[0000] Unmounted container "a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9"
DEBU[0000] ExitCode msg: "crun: error creating systemd unit `libpod-a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9.scope`: got `failed`: oci runtime error"
Error: OCI runtime error: crun: error creating systemd unit `libpod-a19b1763604e8e4593692e425cf00151ec989537f472b520c4d5df80d290e1b9.scope`: got `failed`
A friendly reminder that this issue had no activity for 30 days.
@giuseppe any follow-up?
Since it is systemd that is failing, is there anything useful in the user journal (journalctl --user -e)?
Today is the day I learn that users with a UID < 1000 don't get their own journal entries; they all go to the system journal instead, which our user does not have permission to read. I wonder if that could be relevant to the issue I'm having. I'll see if our system admins will give us access to the journal and will report back.
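For anyone else hitting this: once you have read access to the system journal (for example via the systemd-journal group), the entries for a single user can still be filtered out. A sketch using UID 213 and the user-manager PID 1577 that appear in the logs in this thread:

```shell
# Messages from UIDs below the per-user journal threshold land in the
# system journal. With read access, filter by the sending UID:
journalctl _UID=213 -e
# Or watch only the user's "systemd --user" instance:
journalctl _PID=1577 -f
```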
I now have access to the journal. However, I did notice that we've patched our systems since I last tested this. Our Podman version has gone up to 4.2.0 (I apologize this still isn't the latest version; we're using the latest version available in the RHEL8 repositories), and I no longer see the log Failed to add conmon to systemd sandbox cgroup: Unit libpod-conmon-*****.scope already exists. However, using systemd still fails. Attached are the debug logs and journal logs for my latest attempt to run hello-world.
Console logs:
DEBU[0000] Setting Cgroups for container 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d to user.slice:libpod:770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/usr/local/csapps/podman/storage/overlay/7c837343a791def600158365a5f83c4a500ce90d4c62ec7a8469b69a7b896ca2/merged"
DEBU[0000] Created OCI spec for container 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d at /usr/local/csapps/podman/storage/overlay-containers/770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d -u 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d -r /usr/bin/runc -b /usr/local/csapps/podman/storage/overlay-containers/770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d/userdata -p /run/user/213/storage/overlay-containers/770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d/userdata/pidfile -n wizardly_lumiere --exit-dir /run/user/213/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --log-size-max 10485760 --conmon-pidfile /run/user/213/storage/overlay-containers/770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /usr/local/csapps/podman/storage --exit-command-arg --runroot --exit-command-arg /run/user/213/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/213/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /usr/local/csapps/podman/storage/volumes --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d
DEBU[0000] Tearing down network namespace at /run/user/213/netns/netns-99361b75-8d41-c8fb-1423-b4a1b1387aea for container 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d
DEBU[0000] Unmounted container "770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d"
DEBU[0000] ExitCode msg: "runc: time=\"2023-02-08t15:10:31-06:00\" level=error msg=\"runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \\\"libpod-770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d.scope\\\" (properties [{name:description value:\\\"libcontainer container 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d\\\"} {name:slice value:\\\"user.slice\\\"} {name:delegate value:true} {name:pids value:@au [305554]} {name:memoryaccounting value:true} {name:cpuaccounting value:true} {name:ioaccounting value:true} {name:tasksaccounting value:true} {name:defaultdependencies value:false}]): error creating systemd unit `libpod-770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d.scope`: got `failed`\": oci runtime error"
Error: OCI runtime error: runc: time="2023-02-08T15:10:31-06:00" level=error msg="runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"libpod-770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d.scope\" (properties [{Name:Description Value:\"libcontainer container 770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d\"} {Name:Slice Value:\"user.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [305554]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): error creating systemd unit `libpod-770079ce47d2926f7d657b1650ea7108d79cdac15783392feb355e2c2bad612d.scope`: got `failed`"
Journal logs:
Feb 08 15:16:38 sl-sispacew1t.corp.wan dbus-daemon[1298]: [system] Rejected send message, 4 matched rules; type="method_call", sender=":1.21" (uid=213 pid=1577 comm="/usr/lib/systemd/systemd --user ") interface="org.freedesktop.systemd1.Manager" member="AttachProcessesToUnit" error name="(unset)" requested_reply="0" destination="org.freedesktop.systemd1" (uid=0 pid=1 comm="/usr/lib/systemd/systemd --switched-root --system ")
Feb 08 15:16:38 sl-sispacew1t.corp.wan systemd[1577]: podman-305690.scope: Failed to add PIDs to scope's control group: Permission denied
Feb 08 15:16:38 sl-sispacew1t.corp.wan systemd[1577]: podman-305690.scope: Failed with result 'resources'.
Feb 08 15:16:38 sl-sispacew1t.corp.wan systemd[1577]: Failed to start podman-305690.scope.
Feb 08 15:16:39 sl-sispacew1t.corp.wan podman[305690]:
Feb 08 15:16:39 sl-sispacew1t.corp.wan podman[305690]: 2023-02-08 15:16:39.056983504 -0600 CST m=+0.186603786 container create 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186 (image=quay.io/podman/hello:latest, name=mystifying_swartz, health_status=, org.opencontainers.image.source=https://github.com/containers/podman.git, artist=Máirín Ní Ḋuḃṫaiġ, Twitter:@mairin, io.buildah.version=1.27.2, maintainer=Podman Maintainers, org.opencontainers.image.created=2022-12-27T18:19:51+00:00, org.opencontainers.image.revision=3fbf62e968d2904d70d174e943cb942e5fa35da3)
Feb 08 15:16:39 sl-sispacew1t.corp.wan conmon[305705]: conmon 87d0ba20413f5d8e3afa <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Feb 08 15:16:39 sl-sispacew1t.corp.wan dbus-daemon[1298]: [system] Rejected send message, 4 matched rules; type="method_call", sender=":1.21" (uid=213 pid=1577 comm="/usr/lib/systemd/systemd --user ") interface="org.freedesktop.systemd1.Manager" member="AttachProcessesToUnit" error name="(unset)" requested_reply="0" destination="org.freedesktop.systemd1" (uid=0 pid=1 comm="/usr/lib/systemd/systemd --switched-root --system ")
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: libpod-conmon-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope: Failed to add PIDs to scope's control group: Permission denied
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: libpod-conmon-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope: Failed with result 'resources'.
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: Failed to start libpod-conmon-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope.
Feb 08 15:16:39 sl-sispacew1t.corp.wan conmon[305707]: conmon 87d0ba20413f5d8e3afa <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Feb 08 15:16:39 sl-sispacew1t.corp.wan conmon[305707]: conmon 87d0ba20413f5d8e3afa <ndebug>: terminal_ctrl_fd: 13
Feb 08 15:16:39 sl-sispacew1t.corp.wan conmon[305707]: conmon 87d0ba20413f5d8e3afa <ndebug>: winsz read side: 16, winsz write side: 16
Feb 08 15:16:39 sl-sispacew1t.corp.wan podman[305690]: 2023-02-08 15:16:38.941612178 -0600 CST m=+0.071232448 image pull hello-world
Feb 08 15:16:39 sl-sispacew1t.corp.wan dbus-daemon[1298]: [system] Rejected send message, 4 matched rules; type="method_call", sender=":1.21" (uid=213 pid=1577 comm="/usr/lib/systemd/systemd --user ") interface="org.freedesktop.systemd1.Manager" member="AttachProcessesToUnit" error name="(unset)" requested_reply="0" destination="org.freedesktop.systemd1" (uid=0 pid=1 comm="/usr/lib/systemd/systemd --switched-root --system ")
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: libpod-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope: Failed to add PIDs to scope's control group: Permission denied
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: libpod-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope: Failed with result 'resources'.
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: Failed to start libcontainer container 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.
Feb 08 15:16:39 sl-sispacew1t.corp.wan conmon[305707]: conmon 87d0ba20413f5d8e3afa <nwarn>: runtime stderr: time="2023-02-08T15:16:39-06:00" level=error msg="runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"libpod-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope\" (properties [{Name:Description Value:\"libcontainer container 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186\"} {Name:Slice Value:\"user.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [305719]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): error creating systemd unit `libpod-87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186.scope`: got `failed`"
Feb 08 15:16:39 sl-sispacew1t.corp.wan conmon[305707]: conmon 87d0ba20413f5d8e3afa <error>: Failed to create container: exit status 1
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /usr/local/csapps/podman/storage --runroot /run/user/213/storage --log-level debug --cgroup-manager systemd --tmpdir /run/user/213/libpod/tmp --network-config-dir --network-backend netavark --volumepath /usr/local/csapps/podman/storage/volumes --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186)"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\""
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Merged system config \"/home/csapp/.config/containers/containers.conf\""
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Initializing boltdb state at /usr/local/csapps/podman/storage/libpod/bolt_state.db"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="systemd-logind: Unknown object '/'."
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using graph driver overlay"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using graph root /usr/local/csapps/podman/storage"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using run root /run/user/213/storage"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using static dir /usr/local/csapps/podman/storage/libpod"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using tmp dir /run/user/213/libpod/tmp"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using volume path /usr/local/csapps/podman/storage/volumes"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Set libpod namespace to \"\""
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Cached value indicated that overlay is supported"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Cached value indicated that overlay is supported"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Cached value indicated that metacopy is not being used"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Cached value indicated that native-diff is not being used"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: failed to mount overlay: invalid argument"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Initializing event backend journald"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=info msg="Setting parallel job count to 13"
Feb 08 15:16:39 sl-sispacew1t.corp.wan dbus-daemon[1298]: [system] Rejected send message, 4 matched rules; type="method_call", sender=":1.21" (uid=213 pid=1577 comm="/usr/lib/systemd/systemd --user ") interface="org.freedesktop.systemd1.Manager" member="AttachProcessesToUnit" error name="(unset)" requested_reply="0" destination="org.freedesktop.systemd1" (uid=0 pid=1 comm="/usr/lib/systemd/systemd --switched-root --system ")
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: podman-305720.scope: Failed to add PIDs to scope's control group: Permission denied
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: podman-305720.scope: Failed with result 'resources'.
Feb 08 15:16:39 sl-sispacew1t.corp.wan systemd[1577]: Failed to start podman-305720.scope.
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Cleaning up container 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186"
Feb 08 15:16:39 sl-sispacew1t.corp.wan podman[305720]: 2023-02-08 15:16:39.270279617 -0600 CST m=+0.060625770 container cleanup 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186 (image=quay.io/podman/hello:latest, name=mystifying_swartz, health_status=, org.opencontainers.image.revision=3fbf62e968d2904d70d174e943cb942e5fa35da3, org.opencontainers.image.source=https://github.com/containers/podman.git, artist=Máirín Ní Ḋuḃṫaiġ, Twitter:@mairin, io.buildah.version=1.27.2, maintainer=Podman Maintainers, org.opencontainers.image.created=2022-12-27T18:19:51+00:00)
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Network is already cleaned up, skipping..."
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Container 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186 storage is already unmounted, skipping..."
Feb 08 15:16:39 sl-sispacew1t.corp.wan /usr/bin/podman[305720]: time="2023-02-08T15:16:39-06:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /usr/local/csapps/podman/storage --runroot /run/user/213/storage --log-level debug --cgroup-manager systemd --tmpdir /run/user/213/libpod/tmp --network-config-dir --network-backend netavark --volumepath /usr/local/csapps/podman/storage/volumes --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup 87d0ba20413f5d8e3afa5895815599dc6fe7236f9d7a77ddf6f3263511329186)"
podman-305720.scope: Failed to add PIDs to scope's control group: Permission denied
This looks like a systemd configuration issue.
That was my suspicion when I saw that line. Oddly enough, I still get those errors in the journal even though the container launches successfully using cgroupfs.
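For reference, the cgroupfs fallback mentioned above can be made the default for a rootless user via containers.conf. A sketch (a config fragment; the path is the standard per-user location, adjust if your setup overrides it):

```shell
# Select the cgroupfs manager instead of systemd for this user's
# rootless containers, via the per-user containers.conf.
mkdir -p ~/.config/containers
cat >> ~/.config/containers/containers.conf <<'EOF'
[engine]
cgroup_manager = "cgroupfs"
EOF
# Verify which manager podman now reports:
podman info --format '{{.Host.CgroupManager}}'
```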
Do you have any suggestions as to how I can try to figure out what's going wrong here?
Could you try something like systemd-run --scope --user echo hello? Does that work?
Can you temporarily disable SELinux and try running the container again?
SELinux is already disabled due to a previous issue involving the /usr folder (which will be fixed when Podman 4.3 reaches RHEL).
Using systemd-run --user --scope results in the same error. It appears to match the behavior described in https://github.com/systemd/systemd/issues/3388, but that issue was fixed in systemd 238 and our server has 239.
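Since systemd-run --user --scope fails the same way, it may also be worth confirming that the host is actually on cgroup v2 and that controllers are delegated to the user's systemd instance, which is what "Failed to add PIDs to scope's control group: Permission denied" usually points at. A diagnostic sketch (UID 213 is the affected user from the logs; paths assume a standard unified hierarchy):

```shell
# "cgroup2fs" here means the unified (v2) hierarchy is mounted:
stat -fc %T /sys/fs/cgroup
# Controllers delegated to the user's slice and user manager;
# rootless podman with the systemd manager needs cpu, memory and pids:
cat /sys/fs/cgroup/user.slice/user-213.slice/cgroup.controllers
cat /sys/fs/cgroup/user.slice/user-213.slice/user@213.service/cgroup.controllers
# Delegation is normally enabled via a drop-in such as:
#   /etc/systemd/system/user@.service.d/delegate.conf
#   [Service]
#   Delegate=cpu cpuset io memory pids
```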
I guess at this point it's not a Podman bug, so this issue can probably be closed. I'll reach out to the systemd repo for more support.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
After updating to use cgroups v2, I am no longer able to start rootless containers. I've shared the error below. It looks similar to the error described in https://github.com/containers/podman/issues/16107, but I'm on RHEL instead of Fedora and on an older Podman version than the one that ticket says introduced the issue. Unfortunately I'm not able to update Podman at the moment, as I do not have root access to this system. If you need any more information, let me know and I'll do my best to provide it.
Steps to reproduce the issue:
podman --log-level=debug start space_minio
Describe the results you received:
Describe the results you expected:
A successfully running pod
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info:
Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
I have not tested with the latest version of Podman, as I do not have access to update it on this system.