containers / crun

A fast and lightweight fully featured OCI runtime and C library for running containers
GNU General Public License v2.0

Cannot run any container: OCI runtime error: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument #704

Closed X-dark closed 3 years ago

X-dark commented 3 years ago

Arch Linux recently switched Podman's default runtime from runc to crun. Since the switch to crun, I cannot create any container. I get the following error:

Error: OCI runtime error: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument

Podman info

host:
  arch: amd64
  buildahVersion: 1.21.0
  cgroupControllers:
  - memory
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.0.29-1
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: 7e6de6678f6ed8a18661e1d5721b81ccee293b9b'
  cpus: 4
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: lorien
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.12.15-arch1-1
  linkmode: dynamic
  memFree: 274452480
  memTotal: 12383404032
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 0.20.1-2
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1
      commit: 38271d1c8d9641a2cdc70acfa3dcb6996d124b3d
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 24h 49m 0.9s (Approximately 1.00 days)
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 0
    stopped: 5
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 35
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.2.2
  Built: 1625835244
  BuiltTime: Fri Jul  9 14:54:04 2021
  GitCommit: d577c44e359f9f8284b38cf984f939b3020badc3
  GoVersion: go1.16.5
  OsArch: linux/amd64
  Version: 3.2.2

/sys/fs/cgroup content

ls -l /sys/fs/cgroup/
total 0
-r--r--r--  1 root root 0 Jul 21 18:44 cgroup.controllers
-rw-r--r--  1 root root 0 Jul 21 19:21 cgroup.max.depth
-rw-r--r--  1 root root 0 Jul 21 19:21 cgroup.max.descendants
-rw-r--r--  1 root root 0 Jul 21 19:21 cgroup.procs
-r--r--r--  1 root root 0 Jul 21 19:21 cgroup.stat
-rw-r--r--  1 root root 0 Jul 21 19:16 cgroup.subtree_control
-rw-r--r--  1 root root 0 Jul 21 19:21 cgroup.threads
-rw-r--r--  1 root root 0 Jul 21 19:21 cpu.pressure
-r--r--r--  1 root root 0 Jul 21 19:21 cpuset.cpus.effective
-r--r--r--  1 root root 0 Jul 21 19:21 cpuset.mems.effective
-r--r--r--  1 root root 0 Jul 21 19:21 cpu.stat
drwxr-xr-x  2 root root 0 Jul 21 19:16 dev-hugepages.mount
drwxr-xr-x  2 root root 0 Jul 21 19:16 dev-mqueue.mount
drwxr-xr-x  2 root root 0 Jul 21 18:25 init.scope
-rw-r--r--  1 root root 0 Jul 21 19:21 io.cost.model
-rw-r--r--  1 root root 0 Jul 21 19:21 io.cost.qos
-rw-r--r--  1 root root 0 Jul 21 19:21 io.pressure
-r--r--r--  1 root root 0 Jul 21 19:21 io.stat
drwxr-xr-x  2 root root 0 Jul 21 19:16 machine.slice
-r--r--r--  1 root root 0 Jul 21 19:21 memory.numa_stat
-rw-r--r--  1 root root 0 Jul 21 19:21 memory.pressure
-r--r--r--  1 root root 0 Jul 21 19:21 memory.stat
drwxr-xr-x  2 root root 0 Jul 21 19:16 proc-fs-nfsd.mount
drwxr-xr-x  2 root root 0 Jul 21 19:16 proc-sys-fs-binfmt_misc.mount
drwxr-xr-x  2 root root 0 Jul 21 19:16 sys-fs-fuse-connections.mount
drwxr-xr-x  2 root root 0 Jul 21 19:16 sys-kernel-config.mount
drwxr-xr-x  2 root root 0 Jul 21 19:16 sys-kernel-debug.mount
drwxr-xr-x  2 root root 0 Jul 21 19:16 sys-kernel-tracing.mount
drwxr-xr-x 66 root root 0 Jul 21 19:16 system.slice
drwxr-xr-x  4 root root 0 Jul 21 19:16 user.slice

trace run of podman

``` podman --log-level trace run docker.io/alpine:latest INFO[0000] podman filtering at log level trace DEBU[0000] Called run.PersistentPreRunE(podman --log-level trace run docker.io/alpine:latest) TRAC[0000] Reading configuration file "/usr/share/containers/containers.conf" DEBU[0000] Merged system config "/usr/share/containers/containers.conf" TRAC[0000] &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.38.11 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[nproc=4194304:4194304] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:bridge NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile: ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/libpod/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageBuildFormat:oci ImageDefaultTransport:docker:// ImageParallelCopies:0 ImageDefaultFormat: InfraCommand: InfraImage:k8s.gcr.io/pause:3.5 InitPath:/usr/libexec/podman/catatonit LockType:shm MachineEnabled:false MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun 
OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc] runsc:[/usr/bin/runsc /usr/sbin/runsc /usr/local/bin/runsc /usr/local/sbin/runsc /bin/runsc /sbin/runsc /run/current-system/sw/bin/runsc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc kata runsc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/var/lib/containers/storage/libpod StopTimeout:10 TmpDir:/run/libpod VolumePath:/var/lib/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman DefaultSubnet:10.88.0.0/16 NetworkConfigDir:/etc/cni/net.d/}} TRAC[0000] Reading configuration file "/etc/containers/containers.conf" DEBU[0000] Merged system config "/etc/containers/containers.conf" TRAC[0000] &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.38.11 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[nproc=4194304:4194304] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true 
EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:bridge NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile: ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/libpod/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageBuildFormat:oci ImageDefaultTransport:docker:// ImageParallelCopies:0 ImageDefaultFormat: InfraCommand: InfraImage:k8s.gcr.io/pause:3.5 InitPath:/usr/libexec/podman/catatonit LockType:shm MachineEnabled:false MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc] runsc:[/usr/bin/runsc /usr/sbin/runsc /usr/local/bin/runsc /usr/local/sbin/runsc /bin/runsc /sbin/runsc /run/current-system/sw/bin/runsc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc kata runsc] 
RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/var/lib/containers/storage/libpod StopTimeout:10 TmpDir:/run/libpod VolumePath:/var/lib/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman DefaultSubnet:10.88.0.0/16 NetworkConfigDir:/etc/cni/net.d/}} DEBU[0000] Using conmon: "/usr/bin/conmon" DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db DEBU[0000] Overriding run root "/run/containers/storage" with "/var/run/containers/storage" from database DEBU[0000] Overriding tmp dir "/run/libpod" with "/var/run/libpod" from database DEBU[0000] Using graph driver overlay DEBU[0000] Using graph root /var/lib/containers/storage DEBU[0000] Using run root /var/run/containers/storage DEBU[0000] Using static dir /var/lib/containers/storage/libpod DEBU[0000] Using tmp dir /var/run/libpod DEBU[0000] Using volume path /var/lib/containers/storage/volumes DEBU[0000] Set libpod namespace to "" DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is being used DEBU[0000] cached value indicated that native-diff is not being used INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true DEBU[0000] Initializing event backend journald TRAC[0000] found runtime "" DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI 
runtime runc: invalid argument DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument DEBU[0000] Using OCI runtime "/usr/bin/crun" INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist DEBU[0000] Default CNI network name podman is unchangeable INFO[0000] Setting parallel job count to 13 DEBU[0000] Pulling image docker.io/alpine:latest (policy: missing) DEBU[0000] Looking up image "docker.io/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/alpine:latest" ... DEBU[0000] Trying "docker.io/library/alpine:latest" ... DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Looking up image "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/library/alpine:latest" ... DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/library/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Looking up image "docker.io/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/alpine:latest" ... DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Inspecting image d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Looking up image "docker.io/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/alpine:latest" ... DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Inspecting image d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Inspecting image d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 DEBU[0000] using systemd mode: false DEBU[0000] No hostname set; container's hostname will default to runtime default DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json" DEBU[0000] Allocated lock 7 for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] created container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" DEBU[0000] container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" has work directory "/var/lib/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata" DEBU[0000] container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" has run directory 
"/var/run/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata" DEBU[0000] Not attaching to stdin DEBU[0000] Made network namespace at /run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827 for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is being used DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true DEBU[0000] overlay: mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/7UAIWVTHEN62XANJRDPLV24XWD,upperdir=/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/diff,workdir=/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/work DEBU[0000] mounted container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" at "/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/merged" DEBU[0000] Created root filesystem for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba at /var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/merged INFO[0000] Got pod network &{Name:stoic_poitras Namespace:stoic_poitras ID:99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba NetNS:/run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827 Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]} INFO[0000] Adding pod stoic_poitras_stoic_poitras to CNI network "podman" (type=bridge) DEBU[0000] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:06:82:16:a3:57:89 Sandbox:} {Name:vethaf474ad4 Mac:1e:4a:59:c4:23:85 Sandbox:} {Name:eth0 Mac:8e:29:43:b4:f5:a5 
Sandbox:/run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827}] [{Version:4 Interface:0xc000376a18 Address:{IP:10.88.0.32 Mask:ffff0000} Gateway:10.88.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[] [] []}} DEBU[0000] Workdir "/" resolved to host path "/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/merged" DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# Configuration file for default mounts in containers (see man 5" DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# containers-mounts.conf for further information)" DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "" DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription DEBU[0000] Setting CGroups for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba to machine.slice:libpod:99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d DEBU[0000] Created OCI spec for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba at /var/lib/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata/config.json DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba -u 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata -p /var/run/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata/pidfile -n stoic_poitras --exit-dir /var/run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata/ctr.log --log-level trace 
--runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/var/run/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata/oci-log --conmon-pidfile /var/run/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg trace --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba]" INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba.scope DEBU[0000] Received: -1 DEBU[0000] Cleaning up container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba DEBU[0000] Tearing down network namespace at /run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827 for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba INFO[0000] Got pod network &{Name:stoic_poitras Namespace:stoic_poitras ID:99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba NetNS:/run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827 Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]} INFO[0000] Deleting pod stoic_poitras_stoic_poitras from CNI network "podman" (type=bridge) 
DEBU[0001] unmounted container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" DEBU[0001] ExitCode msg: "writing file `/sys/fs/cgroup/cgroup.subtree_control`: invalid argument: oci runtime error" Error: OCI runtime error: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument ```
flouthoc commented 3 years ago

@X-dark I am using the same configuration (cgroupManager: systemd and cgroupVersion: v2) but I am unable to reproduce. Could you tell me the values of the following, or make sure they are equivalent to:

flouthoc commented 3 years ago

@X-dark AFAIK the following controllers should be enabled for rootless users for things to work with v2: cpu cpuset io memory pids. I am not sure, but please try adding these to /sys/fs/cgroup/cgroup.subtree_control and /sys/fs/cgroup/cgroup.controllers

using (from a privileged or root user): echo +cpu +cpuset +io +memory +pids > /sys/fs/cgroup/cgroup.subtree_control
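As a hedged sketch of that suggestion (run as root on a cgroup v2 host): note that cgroup.controllers is read-only, since it only lists the controllers *available* for delegation, so only cgroup.subtree_control accepts writes.

```shell
# Sketch: enable the controllers suggested above (run as root).
# cgroup.controllers is read-only; only cgroup.subtree_control takes writes.
want="+cpu +cpuset +io +memory +pids"
ctrl=/sys/fs/cgroup/cgroup.subtree_control
if [ -w "$ctrl" ]; then
    # A rejected write surfaces as "Invalid argument", the error in this issue.
    echo "$want" > "$ctrl" 2>/dev/null && echo "controllers enabled" \
        || echo "write to $ctrl failed"
else
    echo "$ctrl not writable (need root on a cgroup v2 host)"
fi
```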

X-dark commented 3 years ago

@X-dark I am using the same configuration (cgroupManager: systemd and cgroupVersion: v2) but I am unable to reproduce. Could you tell me the values of the following, or make sure they are equivalent to:

  • systemd.unified_cgroup_hierarchy=1 — if you are using GRUB, it should be defined in /etc/default/grub

I do not have this option in my GRUB config. But I understood it is the default, and is only needed with =0 to revert to cgroup v1. Any way to check?

  • cat /sys/fs/cgroup/cgroup.subtree_control
memory hugetlb pids rdma
  • What happens when you do echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control and echo +cpuset > /sys/fs/cgroup/cgroup.controllers ?
  • or if cpuset is already there try echo +cpu > /sys/fs/cgroup/cgroup.subtree_control and echo +cpu > /sys/fs/cgroup/cgroup.controllers

Seems to be accepted. After that I have

cpuset cpu memory hugetlb pids rdma

The container is still not running though.

  • What happens if you run as the root user, using sudo?

All of that was done as root or with sudo. However, I just tried rootless, and with that I seem to have no issue (though I did not try before adding cpu/cpuset above).
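To answer the "any way to check?" question above: one generic check (not a command from the thread) is the filesystem type of /sys/fs/cgroup itself. A cgroup2 mount there means the unified hierarchy is active, i.e. systemd.unified_cgroup_hierarchy=1 is the effective default.

```shell
# GNU stat's %T prints the filesystem type name; "cgroup2fs" means
# /sys/fs/cgroup is a pure cgroup v2 (unified hierarchy) mount.
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown)
if [ "$fstype" = "cgroup2fs" ]; then
    echo "cgroup v2 unified hierarchy is active"
else
    echo "not pure cgroup v2 (filesystem type: $fstype)"
fi
```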

X-dark commented 3 years ago

Actually, launching a container as root makes the cpu and cpuset flags disappear after the error.

Rootless works fine without the flags being there first.

flouthoc commented 3 years ago

@X-dark so adding cpu and cpuset solves it for you?

X-dark commented 3 years ago

No, running rootless does. Adding cpu and cpuset has no effect, and they get removed once I hit the error.

Root containers are still failing.

flouthoc commented 3 years ago

@X-dark Could you also please paste the output of cat /proc/self/cgroup and cat /proc/cgroups

Foxboron commented 3 years ago

Downstream bugreport: https://bugs.archlinux.org/task/71560

I can't reproduce either, so I believe there is a user issue somewhere.

X-dark commented 3 years ago
0::/user.slice/user-1000.slice/session-5.scope
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  0       158     1
cpu     0       158     1
cpuacct 0       158     1
blkio   0       158     1
memory  0       158     1
devices 0       158     1
freezer 0       158     1
net_cls 0       158     1
perf_event      0       158     1
net_prio        0       158     1
hugetlb 0       158     1
pids    0       158     1
rdma    0       158     1
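Reading that /proc/cgroups output: the second column is the v1 hierarchy ID, and 0 means the controller is attached to the v2 unified hierarchy — here all of them, i.e. a pure cgroup v2 host. A small sketch to summarize it:

```shell
# Summarize /proc/cgroups: column 2 is the v1 hierarchy ID; 0 means the
# controller is attached to the cgroup v2 unified hierarchy.
if [ -r /proc/cgroups ]; then
    summary=$(awk 'NR > 1 { if ($2 == 0) v2++; else v1++ }
                   END { printf "v2-attached: %d, v1-attached: %d", v2 + 0, v1 + 0 }' /proc/cgroups)
else
    summary="no /proc/cgroups on this host"
fi
echo "$summary"
```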

@Foxboron it could be, but I do not remember any customization I may have done.

flouthoc commented 3 years ago

@Foxboron thanks for confirming at your end.

@X-dark Just to keep things on the same page

X-dark commented 3 years ago

@flouthoc that is a good summary, yes

flouthoc commented 3 years ago

@X-dark does it work fine if you use sudo <non-root>?

X-dark commented 3 years ago

sudo -u cedric podman run docker.io/alpine:latest ls works fine
sudo podman run docker.io/alpine:latest ls fails

flouthoc commented 3 years ago

@X-dark try with this edit

Restart the system after these three steps and then check. I am not sure, but the problem could also be related to an RT process, as stated here: https://github.com/lxc/lxc/issues/3545#issue-714101025

X-dark commented 3 years ago

I am not sure but problem could also be related with RT process as stated here lxc/lxc#3545 (comment)

Good catch. I stopped mpd, which runs as a realtime process, and it started to work as root (without any other change or reboot).
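For anyone hitting this later: on kernels built with CONFIG_RT_GROUP_SCHED, writes to cgroup.subtree_control can fail with EINVAL while realtime tasks (such as mpd here) sit outside the root cgroup. A generic sketch (not a command from the thread) for spotting realtime tasks, by reading rt_priority from /proc:

```shell
# Sketch: find tasks running under a realtime policy (SCHED_FIFO/SCHED_RR)
# by reading rt_priority, field 40 of /proc/<pid>/stat (nonzero = realtime).
found=""
for f in /proc/[0-9]*/stat; do
    line=$(cat "$f" 2>/dev/null) || continue  # task may exit mid-scan
    rest=${line##*) }                         # drop "pid (comm) "; comm can contain ')'
    set -- $rest                              # $1 = state, ${38} = rt_priority
    if [ "${38:-0}" -gt 0 ] 2>/dev/null; then
        found="$found ${line%% *}"            # ${line%% *} is the pid
    fi
done
echo "realtime task pids:${found:- none}"
```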

flouthoc commented 3 years ago

@X-dark cheers!!! @giuseppe do you think we can document this in the crun manuals?

giuseppe commented 3 years ago

Yes, it is definitely something we can document.

flouthoc commented 3 years ago

@Foxboron @X-dark Could you close this issue if it is resolved? Also, I think the downstream report can be closed: https://bugs.archlinux.org/task/71560. @X-dark feel free to raise a PR against the crun manuals; otherwise I'll raise it myself.

Foxboron commented 3 years ago

@flouthoc Yep, thanks for the help!

X-dark commented 3 years ago

PR opened. Closing this issue. Thanks @flouthoc

edsantiago commented 1 year ago

I just hit this on my home system after a reboot: I could not run podman (4.3.1) as root, and got the above-mentioned error. It turns out I had started mpd before running podman, and it cost me a lot of time to track this issue down. Killing mpd fixed the problem; I was able to start mpd again later, after having run podman as root.

I think a friendly error message (podman, crun, whatever) would be much, much appreciated.

giridshar841 commented 1 week ago

I am running into a similar issue when I reboot my system. Here are some details; note that mpd is not even installed on the system.

podman start of the container fails after the system reboots, and I am not able to figure out why.

podman start <container> throws this:

Error: OCI runtime error: unable to start container "15b6e875dc79d0bdc6976347a2c0e20c28ef58b4e07396434502f7224875a028": writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument

Observations: before the reboot, if I look at the cgroup.subtree_control file, the contents are:

cat /sys/fs/cgroup/cgroup.subtree_control
cpuset cpu io memory hugetlb pids rdma misc

After the reboot, cpuset is missing:

cat /sys/fs/cgroup/cgroup.subtree_control
cpu io memory hugetlb pids rdma misc <<< notice cpuset gone missing.

When I try to write this, it fails

echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
-bash: echo: write error: Invalid argument
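To narrow down which controller the kernel is rejecting, one hypothetical probe (run as root, not a command from the thread) is to enable the available controllers one at a time:

```shell
# Probe sketch (run as root): enable the available controllers one at a
# time to see which write the kernel rejects.
base=/sys/fs/cgroup
if [ -r "$base/cgroup.controllers" ] && [ -w "$base/cgroup.subtree_control" ]; then
    for c in $(cat "$base/cgroup.controllers"); do
        if echo "+$c" > "$base/cgroup.subtree_control" 2>/dev/null; then
            echo "ok:       $c"
        else
            echo "rejected: $c"
        fi
    done
else
    echo "need root on a cgroup v2 host"
fi
```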

Also, the cgroup mounts before and after the reboot. Before:

mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)

After:

mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)

I even added GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1" to the GRUB config and rebooted, but no luck. I am out of ideas at this point; any help would be greatly appreciated.

giuseppe commented 1 week ago

this could be the reason: https://lore.kernel.org/io-uring/CA+wXwBQwgxB3_UphSny-yAP5b26meeOu1W4TwYVcD_+5gOhvPw@mail.gmail.com/

giridshar841 commented 1 week ago

this could be the reason: https://lore.kernel.org/io-uring/CA+wXwBQwgxB3_UphSny-yAP5b26meeOu1W4TwYVcD_+5gOhvPw@mail.gmail.com/

Did you get any reply on this? What was the workaround?

giuseppe commented 1 week ago

I have not reported it