Closed: Gelbpunkt closed this issue 2 weeks ago.
Please provide the full command you are running.
That error happens with --cgroup-parent=$PARENT when $PARENT is not a systemd slice (i.e. it doesn't have a .slice suffix).
$ podman run --rm --cgroup-parent foo alpine true
Error: did not receive systemd slice as cgroup parent when using systemd to manage cgroups: invalid argument
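By contrast, a parent that does end in .slice is accepted; user.slice below is only an illustrative value, not one taken from this thread:
$ podman run --rm --cgroup-parent user.slice alpine true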
I can reproduce it with a simple
podman run --rm -it alpine:edge
Edit: Incorrect, see next comment.
Actually, no. It seems to occur in only one pod of mine. I had just rebooted after making this post, so my last comment was from memory, but I was able to check again now since it started occurring again:
[glitch@syndra ~]$ podman run --rm -it alpine:edge ash
/ #
[glitch@syndra ~]$ podman run --rm -it --pod glitch alpine:edge ash
Error: did not receive systemd slice as cgroup parent when using systemd to manage cgroups: invalid argument
Edit: Creating a new pod now and then running a container with --pod test
works. Only this one pod does not work as expected.
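One way to check what this one pod was created with is to inspect its cgroup parent (a sketch; if the pod was created under the cgroupfs backend, the parent will be a plain path rather than a *.slice name, which the systemd cgroup manager then rejects):
$ podman pod inspect glitch --format '{{.CgroupParent}}'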
[jens@syndra ~]$ sudo cat /etc/systemd/system/glitch-pod.service
[Unit]
Description=Create glitch pod
[Service]
Type=oneshot
User=glitch
Group=glitch
ExecStartPre=-/usr/bin/podman pod stop glitch
ExecStartPre=-/usr/bin/podman pod rm -f glitch
ExecStart=/usr/bin/podman pod create --name glitch -p 127.0.0.1:8080:80 -p 127.0.0.1:4443:443 -p 29418:29418
ExecStop=/usr/bin/podman pod rm -f glitch
ExecReload=/usr/bin/podman pod stop glitch
Restart=no
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
This is how the pod in question is created.
If you run with User=, there is no systemd user session active for that user, so it ends up using the cgroupfs backend.
You can install the .service file under $HOME/.config/systemd/user/glitch-pod.service and avoid the User= attribute.
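A minimal sketch of that user unit, assuming the same pod setup as above (the path and default.target are standard user-unit conventions, not values from this thread):

# $HOME/.config/systemd/user/glitch-pod.service
[Unit]
Description=Create glitch pod

[Service]
Type=oneshot
ExecStartPre=-/usr/bin/podman pod stop glitch
ExecStartPre=-/usr/bin/podman pod rm -f glitch
ExecStart=/usr/bin/podman pod create --name glitch -p 127.0.0.1:8080:80 -p 127.0.0.1:4443:443 -p 29418:29418
ExecStop=/usr/bin/podman pod rm -f glitch
ExecReload=/usr/bin/podman pod stop glitch
RemainAfterExit=yes

[Install]
WantedBy=default.target

Note that User= and Group= are gone (a user unit already runs as its owning user), and WantedBy points at default.target, since multi-user.target does not exist in the per-user manager.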
Ahh, thanks so much for this hint! I'll move to user units then :)
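For completeness, a user unit only runs while that user has a session unless lingering is enabled; the usual steps (standard systemd commands, not taken from this thread) would be:
$ systemctl --user daemon-reload
$ systemctl --user enable --now glitch-pod.service
$ sudo loginctl enable-linger glitch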
Issue Description
For a few months already (I really cannot remember when this started), I observe the following when starting rootless containers (in my case via systemd units, but I can reproduce it when running the commands manually):
Error: did not receive systemd slice as cgroup parent when using systemd to manage cgroups: invalid argument
This, however, does not happen for the first couple of systemd unit-provisioned containers. Usually, 90% of my containers (I think there are about 20?) come up fine, and then this starts happening. Rootful containers are not affected, and adding --cgroup-manager=cgroupfs makes it "work", but I would still consider this a bug. This might be caused by systemd, but I have never really touched anything related to it on this server.
Steps to reproduce the issue
Describe the results you received
The containers fail to start
Describe the results you expected
The containers should start fine
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
This is a physical server running a Fedora 40 install that was upgraded from I think 36 (?) over the last few years. The machine is pretty much only used for running containers and otherwise a fairly stock Fedora install.
Additional information
I would be very happy to provide podman maintainers with SSH access or any other means of debugging this issue. You can email me e2e-encrypted with my GPG key at the email address in my GitHub profile, or message me on Matrix, where I am @gelbpunkt:matrix.org.