Closed: vwbusguy closed this issue 2 months ago
Can you please show the output of cat /proc/self/mountinfo?
I think you are seeing a cgroup2 mount, but you are using the hybrid mount model (that is, cgroup v2 mounted under a cgroup v1 hierarchy).
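In case it helps, one way to tell the two modes apart (a sketch; standard systemd mount points assumed) is to check the filesystem type mounted at /sys/fs/cgroup:

# Unified (pure cgroup v2): /sys/fs/cgroup itself is a cgroup2 mount
stat -fc %T /sys/fs/cgroup/
# prints "cgroup2fs" on a pure cgroup v2 host

# Hybrid: /sys/fs/cgroup is a tmpfs holding per-controller v1 mounts,
# with cgroup2 mounted only at /sys/fs/cgroup/unified
stat -fc %T /sys/fs/cgroup/unified
# prints "cgroup2fs" here, while /sys/fs/cgroup itself reports "tmpfs"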
Because this is a k8s host (Rancher Elemental SLE Micro), there's a ton of overlay output there, so I did it with a cgroup grep:
grep cgroup /proc/self/mountinfo
30 24 0:26 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:26 - tmpfs tmpfs ro,size=4096k,nr_inodes=1024,mode=755,inode64
31 30 0:27 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime shared:27 - cgroup2 cgroup2 rw
32 30 0:28 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:28 - cgroup cgroup rw,xattr,name=systemd
36 30 0:32 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:29 - cgroup cgroup rw,pids
37 30 0:33 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:30 - cgroup cgroup rw,cpu,cpuacct
38 30 0:34 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:31 - cgroup cgroup rw,net_cls,net_prio
39 30 0:35 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:32 - cgroup cgroup rw,rdma
40 30 0:36 / /sys/fs/cgroup/misc rw,nosuid,nodev,noexec,relatime shared:33 - cgroup cgroup rw,misc
41 30 0:37 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:34 - cgroup cgroup rw,blkio
42 30 0:38 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:35 - cgroup cgroup rw,memory
43 30 0:39 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:36 - cgroup cgroup rw,cpuset
44 30 0:40 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:37 - cgroup cgroup rw,hugetlb
45 30 0:41 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:38 - cgroup cgroup rw,freezer
46 30 0:42 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:39 - cgroup cgroup rw,perf_event
47 30 0:43 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:40 - cgroup cgroup rw,devices
Does podman not support cgroupsv2 in a unified hierarchy? I thought it did.
Ah, so if cgroups2 is mounted in /unified, it's not actually a unified hierarchy but a hybrid one. That's a little confusing.
https://github.com/containers/podman/issues/4659#issuecomment-563378217
It seems I need to follow up with SUSE support on the ramifications of switching the hierarchy on these nodes. The message could be clearer from podman's end, though, as it is confusing to have cgroups v2 enabled and still see this message because it's not enabled in the specific way that podman supports.
It is not really a podman limitation, but more of a kernel + systemd thing. If a controller (like memory or cpu) is enabled on cgroup v1, then it cannot be used on cgroup v2. IMO "hybrid mode" was good only for experimenting with cgroup v2, but it is not really usable, as it requires manual changes to make it work. You'd need to make sure controllers are configured for cgroup v2 and not cgroup v1 at startup, so for Podman and crun we decided not to support it.
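On a systemd host, the switch to the unified hierarchy is normally made on the kernel command line. A rough sketch for a GRUB-based SUSE system follows; on SLE Micro/Elemental the boot configuration is managed by the OS tooling, so it is worth confirming the procedure with SUSE first:

# Sketch: boot systemd with the unified (pure cgroup v2) hierarchy.
# 1. Add the parameter to the kernel command line, e.g. in /etc/default/grub:
#      GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=1"
# 2. Regenerate the bootloader configuration (SUSE path shown) and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot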
Issue Description
I'm seeing this when running podman rootless in privileged mode on Kubernetes, but I'm also able to replicate it directly with podman on the Kubernetes host.
When trying to do some CI automations with podman, we get numerous log entries for:
Steps to reproduce the issue
Describe the results you received
Describe the results you expected
These warnings should not be given in environments that support cgroups-v2.
podman info output
Podman in a container
Yes
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
For now, I've added the PODMAN_IGNORE_CGROUPSV1_WARNING environment variable in my CI, but otherwise this message presents a constant, unnecessary call to action.
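Concretely, the workaround in the CI job looks roughly like this (a sketch; I'm assuming any non-empty value is honored, and the build command/tag are just placeholders):

# Suppress the cgroups v1 warning for the podman processes in this job
export PODMAN_IGNORE_CGROUPSV1_WARNING=1
podman build -t ci-test .   # "ci-test" is only a placeholder tag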
I have also tried mounting /sys/fs/cgroup/ from the host, as a comment on a much earlier GitHub issue suggested, but it did not have an effect.
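Outside of Kubernetes, what I tried is roughly equivalent to the following (a sketch; quay.io/podman/stable stands in for the actual CI image):

# Bind-mount the host cgroup tree into the container running podman
podman run --rm --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  quay.io/podman/stable podman info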
Additional information
I would assume that a grep cgroup2 /proc/filesystems check before showing this message might be sufficient, assuming there aren't any other side effects, which I've yet to discover.
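For what it's worth, such a check would look roughly like this (a sketch), though as discussed above it only confirms that the kernel supports cgroup v2, not that the unified hierarchy is what is actually mounted:

# Sketch: confirms kernel support for cgroup v2, not that it is the
# hierarchy in use on this host
if grep -q cgroup2 /proc/filesystems; then
    echo "kernel supports cgroup v2"
fi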