containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

device permission denied when starting from quadlet / systemd .container #20863

Closed: goshansp closed this issue 10 months ago

goshansp commented 10 months ago

Issue Description

A container started with podman run behaves differently from one started with seemingly the same parameters via a systemd .container unit. Background: I am passing run.oci.keep_original_groups=1 and hence the dialout group into the container. When the container is started from systemd/.container it cannot access the shared device; when it is started with podman run, the device is accessible. podman inspect seems to show no differences between the working and the non-working scenario.
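
For reference, a minimal sketch of the kind of podman run invocation in question (the image name is a placeholder and the exact flags are a reconstruction; only the device path and the group handling come from the description above):

$ podman run -d --name zigbee2mqtt \
    --device /dev/ttyACM0 \
    --group-add keep-groups \
    <zigbee2mqtt-image>

--group-add keep-groups is the CLI form that sets the run.oci.keep_original_groups annotation.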

Diff Inspect

The parameters were compared using podman inspect on both containers: working.txt (started via podman run ...) and nok.txt (started from systemd/.container). The resulting diff doesn't show an obvious difference to me:

$ diff working.txt nok.txt 
3,4c3,4
<           "Id": "83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d",
<           "Created": "2023-12-01T07:28:15.473867054Z",
---
>           "Id": "67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa",
>           "Created": "2023-12-01T07:27:30.326413503Z",
20,21c20,21
<                "Pid": 170884,
<                "ConmonPid": 170882,
---
>                "Pid": 170791,
>                "ConmonPid": 170789,
24c24
<                "StartedAt": "2023-12-01T07:28:16.012209482Z",
---
>                "StartedAt": "2023-12-01T07:27:30.906631225Z",
31c31
<                "CgroupPath": "/user.slice/user-1001.slice/user@1001.service/user.slice/podman-170851.scope/libpod-payload-83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d",
---
>                "CgroupPath": "/user.slice/user-1001.slice/user@1001.service/app.slice/zigbee2mqtt.service/libpod-payload-67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa",
40,44c40,44
<           "ResolvConfPath": "/tmp/containers-user-1001/containers/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata/resolv.conf",
<           "HostnamePath": "/tmp/containers-user-1001/containers/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata/hostname",
<           "HostsPath": "/tmp/containers-user-1001/containers/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata/hosts",
<           "StaticDir": "/var/home/minion/.local/share/containers/storage/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata",
<           "OCIConfigPath": "/var/home/minion/.local/share/containers/storage/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata/config.json",
---
>           "ResolvConfPath": "/tmp/containers-user-1001/containers/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata/resolv.conf",
>           "HostnamePath": "/tmp/containers-user-1001/containers/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata/hostname",
>           "HostsPath": "/tmp/containers-user-1001/containers/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata/hosts",
>           "StaticDir": "/var/home/minion/.local/share/containers/storage/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata",
>           "OCIConfigPath": "/var/home/minion/.local/share/containers/storage/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata/config.json",
46,47c46,47
<           "ConmonPidFile": "/tmp/containers-user-1001/containers/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata/conmon.pid",
<           "PidFile": "/tmp/containers-user-1001/containers/overlay-containers/83efdcf4abd8edce89379403204ccc6a043322e2157121e82c0b39ae52056f4d/userdata/pidfile",
---
>           "ConmonPidFile": "/tmp/containers-user-1001/containers/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata/conmon.pid",
>           "PidFile": "/tmp/containers-user-1001/containers/overlay-containers/67f6fea950cc8ab183b6a9faa15ae54f65eea1673897f7ba8b4056830be06caa/userdata/pidfile",
51,52c51,52
<           "MountLabel": "system_u:object_r:container_file_t:s0:c651,c685",
<           "ProcessLabel": "system_u:system_r:container_t:s0:c651,c685",
---
>           "MountLabel": "system_u:object_r:container_file_t:s0:c339,c873",
>           "ProcessLabel": "system_u:system_r:container_t:s0:c339,c873",
85,87c85,87
<                     "MergedDir": "/var/home/minion/.local/share/containers/storage/overlay/ee88ca16cb4a4e281a038e43ce28181cc7321c89747620a353bb9254e99db1e2/merged",
<                     "UpperDir": "/var/home/minion/.local/share/containers/storage/overlay/ee88ca16cb4a4e281a038e43ce28181cc7321c89747620a353bb9254e99db1e2/diff",
<                     "WorkDir": "/var/home/minion/.local/share/containers/storage/overlay/ee88ca16cb4a4e281a038e43ce28181cc7321c89747620a353bb9254e99db1e2/work"
---
>                     "MergedDir": "/var/home/minion/.local/share/containers/storage/overlay/a727071f4231fd78a74e8d5a792e0e7713774e9856bc6d9e3f612898128c2871/merged",
>                     "UpperDir": "/var/home/minion/.local/share/containers/storage/overlay/a727071f4231fd78a74e8d5a792e0e7713774e9856bc6d9e3f612898128c2871/diff",
>                     "WorkDir": "/var/home/minion/.local/share/containers/storage/overlay/a727071f4231fd78a74e8d5a792e0e7713774e9856bc6d9e3f612898128c2871/work"
144c144
<                "SandboxKey": "/run/user/1001/netns/netns-46270914-2891-f000-7d15-8990a4d4cc83"
---
>                "SandboxKey": "/run/user/1001/netns/netns-5207f0f9-42b1-771c-4728-09a367c83b2d"
152c152
<                "Hostname": "83efdcf4abd8",
---
>                "Hostname": "67f6fea950cc",
162d161
<                     "TZ=Europe/Zuerich",
165a165
>                     "TZ=Europe/Zuerich",
167c167
<                     "HOSTNAME=83efdcf4abd8"
---
>                     "HOSTNAME=67f6fea950cc"
180c180,183
<                "Labels": null,
---
>                "Labels": {
>                     "PODMAN_SYSTEMD_UNIT": "zigbee2mqtt.service",
>                     "io.containers.autoupdate": "registry"
>                },
182a186,187
>                     "io.podman.annotations.autoremove": "TRUE",
>                     "io.podman.annotations.cid-file": "/run/user/1001/zigbee2mqtt.cid",
189c194
<                     "podman",
---
>                     "/usr/bin/podman",
191a197,199
>                     "--cidfile=/run/user/1001/zigbee2mqtt.cid",
>                     "--replace",
>                     "--rm",
193a202
>                     "-d",
198a208,209
>                     "--label",
>                     "io.containers.autoupdate=registry",
211c222,223
<                "sdNotifyMode": "conmon"
---
>                "sdNotifyMode": "conmon",
>                "sdNotifySocket": "/run/user/1001/systemd/notify"
220c232
<                "ContainerIDFile": "",
---
>                "ContainerIDFile": "/run/user/1001/zigbee2mqtt.cid",
241c253
<                "AutoRemove": false,
---
>                "AutoRemove": true,

I fail to see any relevant difference between those two containers - one works, the other doesn't.

  1. How can I attach to the container started via systemd? It currently gets removed (AutoRemove) before I manage to have a look around inside it. How can I get AutoRemove=false?
  2. I was looking for --group-add keep-groups and found that Annotation="run.oci.keep_original_groups=1" in .container is the equivalent (see the sketch after this list). Leaving this here as a reference.
  3. How can I get device access in containers started from systemd/.container?
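
For reference, a sketch of what the .container side could look like, assuming a rootless quadlet at ~/.config/containers/systemd/zigbee2mqtt.container (the image name is a placeholder; only the annotation, the device and the autoupdate label come from this issue):

$ cat > ~/.config/containers/systemd/zigbee2mqtt.container <<'EOF'
[Container]
Image=<zigbee2mqtt-image>
AddDevice=/dev/ttyACM0
# quadlet equivalent of `podman run --group-add keep-groups`
Annotation=run.oci.keep_original_groups=1
Label=io.containers.autoupdate=registry

[Install]
WantedBy=default.target
EOF
$ systemctl --user daemon-reload
$ systemctl --user start zigbee2mqtt.service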

Steps to reproduce the issue

Start a container that requires keep_original_groups from a .container file and try to access the device via said group, for example as sketched below.
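
One possible way to trigger the permission check (the container name and the dd probe are assumptions; any open() on the serial device will do):

$ systemctl --user start zigbee2mqtt.service
$ podman exec zigbee2mqtt sh -c 'dd if=/dev/ttyACM0 of=/dev/null count=0'
# in the broken case this fails with "Permission denied"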

Describe the results you received

Error message when starting via systemd:

Error: Error while opening serialport 'Error: Error: Permission denied, cannot open /dev/ttyACM0'

Describe the results you expected

Containers started from podman run and from systemd/.container should behave the same way and both get access to the device.

podman info output

host:
  arch: arm64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.8-2.fc39.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: '
  cpuUtilization:
    idlePercent: 97.07
    systemPercent: 2.34
    userPercent: 0.59
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: iot
    version: "39"
  eventLogger: journald
  freeLocks: 2037
  hostname: rpi03.pamperspoil.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.5.11-300.fc39.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 157179904
  memTotal: 898637824
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc39.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc39.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.11.1-1.fc39.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.11.1
      commit: 1084f9527c143699b593b44c23555fb3cc4ff2f3
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: false
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.fc39.aarch64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 757592064
  swapTotal: 898625536
  uptime: 114h 17m 16.00s (Approximately 4.75 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/minion/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 1
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/minion/.local/share/containers/storage
  graphRootAllocated: 124588470272
  graphRootUsed: 10328494080
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /tmp/containers-user-1001/containers
  transientStore: false
  volumePath: /var/home/minion/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 1698762633
  BuiltTime: Tue Oct 31 14:30:33 2023
  GitCommit: ""
  GoVersion: go1.21.1
  Os: linux
  OsArch: linux/arm64
  Version: 4.7.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

setenforce 0 to get SELinux out of the way.

goshansp commented 10 months ago

Inching in on the non-working container:

[minion@rpi03 ~]$ systemctl --user start zigbee2mqtt.service
[minion@rpi03 ~]$ podman exec -it zigbee2mqtt /bin/sh
/app # ls -lsatr /dev/ttyACM0 
     0 crw-rw----    1 nobody   nobody    188,   0 Dec  1 08:54 /dev/ttyACM0

working

[minion@rpi03 ~]$ podman exec -it zigbee2mqtt /bin/sh
/app # ls -lsatr /dev/ttyACM0 
     0 crw-rw----    1 nobody   nobody    188,   0 Dec  1 09:00 /dev/ttyACM0

They both look the same ... so why can one container access the device and not the other?

goshansp commented 10 months ago

non-working (started from systemd/.container): uid=0(root) gid=0(root) groups=65534(nobody),0(root)

working (started via podman run): uid=0(root) gid=0(root) groups=65534(nobody),0(root)

I suspect that run.oci.keep_original_groups=1 behaves differently in the two cases, despite podman inspect looking the same in both scenarios.
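
One way to verify this (a sketch, not output from the affected system): read the supplementary groups of the container's main process from the host, where the host dialout GID should show up if keep_original_groups took effect.

$ pid=$(podman inspect -f '{{.State.Pid}}' zigbee2mqtt)
$ grep '^Groups:' /proc/$pid/status
$ getent group dialout    # the host GID that should appear in the Groups line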

goshansp commented 10 months ago

On other systems both scenarios work, so the cause of this issue seems to be on the system side.

goshansp commented 10 months ago

After a reboot it is working.