containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

podman run fails with `--rm` flag #19679

Closed: Fale closed this issue 1 year ago

Fale commented 1 year ago

Issue Description

I had a Fedora IoT installed with a container running every hour via quadlet. The container mounted a couple of folders to write content into them. Tonight the disk (btrfs) filled up and the container failed. I've therefore proceeded to free some space and now I get errors every time I run a container with the --rm option. If I remove the --rm option, everything works properly.

Steps to reproduce the issue

  1. (probably) fill a btrfs volume while podman is trying to write to it
  2. podman run --rm hello-world
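
The steps above can be condensed into a reproduction sketch. This is a hypothetical setup, not taken from the report: it builds a small btrfs loopback volume, points podman at it via the global `--root` flag, fills it to hit ENOSPC, then frees space and attempts a `--rm` run. It is collected into a function for clarity and not executed here; paths and sizes are illustrative assumptions.

```shell
# Hypothetical reproduction of the reported state; all paths/sizes are
# illustrative assumptions, not values from the original report.
repro_enospc_rm() {
  truncate -s 512M /tmp/btrfs.img            # small scratch image
  mkfs.btrfs -q /tmp/btrfs.img               # format it as btrfs
  mkdir -p /tmp/btrfs-root
  sudo mount -o loop /tmp/btrfs.img /tmp/btrfs-root
  # Fill the volume so container writes start failing with ENOSPC.
  dd if=/dev/zero of=/tmp/btrfs-root/filler bs=1M 2>/dev/null || true
  rm -f /tmp/btrfs-root/filler               # free the space again
  # A --rm run against that storage root is what reproduced the error.
  podman --root /tmp/btrfs-root/storage run --rm hello-world
}
```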

Describe the results you received

[fale@mbs0 ~]$ podman run --rm hello-world
ERRO[0000] Unmounting /var/home/fale/.local/share/containers/storage/overlay/4ba6619ce6130ee9b88f49cb37a8d56feadf37aa83d74cc9b864c871ad1dcf64/merged: invalid argument 
Error: mounting storage for container c8ebc0c4785d189c3df8e1b3434f974a2601a4a276b6b9a6a342fb041e6879d4: creating overlay mount to /var/home/fale/.local/share/containers/storage/overlay/4ba6619ce6130ee9b88f49cb37a8d56feadf37aa83d74cc9b864c871ad1dcf64/merged, mount_data="lowerdir=/var/home/fale/.local/share/containers/storage/overlay/l/476PP3JDVQ5BYGMADY2AGFS65H,upperdir=/var/home/fale/.local/share/containers/storage/overlay/4ba6619ce6130ee9b88f49cb37a8d56feadf37aa83d74cc9b864c871ad1dcf64/diff,workdir=/var/home/fale/.local/share/containers/storage/overlay/4ba6619ce6130ee9b88f49cb37a8d56feadf37aa83d74cc9b864c871ad1dcf64/work,,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c268,c569\"": input/output error
[fale@mbs0 ~]$ 

Describe the results you expected

Podman should execute cleanly, as it does when the --rm option is removed.

podman info output

host:
  arch: arm64
  buildahVersion: 1.31.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 90.76
    systemPercent: 5.39
    userPercent: 3.85
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: iot
    version: "38"
  eventLogger: journald
  freeLocks: 2046
  hostname: mbs0.n.fwan.it
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.4.10-200.fc38.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 449773568
  memTotal: 4065251328
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.8.6-1.fc38.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.6
      commit: 73f759f4a39769f60990e7d225f561b4f4f06bcf
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.aarch64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 3963351040
  swapTotal: 4064276480
  uptime: 43h 21m 27.00s (Approximately 1.79 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/fale/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/fale/.local/share/containers/storage
  graphRootAllocated: 4000767283200
  graphRootUsed: 3102973833216
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/fale/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.0
  Built: 1689942161
  BuiltTime: Fri Jul 21 12:22:41 2023
  GitCommit: ""
  GoVersion: go1.20.6
  Os: linux
  OsArch: linux/arm64
  Version: 4.6.0

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

The system is running on an ARM SBC.

Additional information

The issue appears with every container when --rm is present, and with none when --rm is absent.

mheon commented 1 year ago

Does every podman run fail, or just commands that include --rm? This doesn't seem like an error that would be specific to --rm

Fale commented 1 year ago

Only --rm ones...

podman run --rm hello-world fails

podman run hello-world works

Containers with attached volumes have the same behavior

Fale commented 1 year ago

Interestingly enough, after a reboot the problem disappeared and I'm no longer able to replicate the behavior.
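
Since a reboot cleared the error, a stale overlay mount left in the rootless user namespace is a plausible culprit. The following is an untested recovery sketch, short of a reboot: it inspects mounts inside the user namespace with `podman unshare`, lazily detaches the `merged` directory named in the error message, and runs `podman system migrate` to reset podman's view of its storage. Defined as a function only, not run here; whether this clears the exact failure above is an assumption.

```shell
# Hedged, untested alternative to rebooting; the overlay path is the one
# from the error message in this issue.
cleanup_stale_overlay() {
  # List mounts as seen inside the rootless user namespace.
  podman unshare mount | grep overlay
  # Lazily detach the stale merged mount, if one is lingering.
  podman unshare umount -l \
    ~/.local/share/containers/storage/overlay/4ba6619ce6130ee9b88f49cb37a8d56feadf37aa83d74cc9b864c871ad1dcf64/merged
  # Ask podman to re-evaluate its storage and running state.
  podman system migrate
}
```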

rhatdan commented 1 year ago

OK, reopen if it comes back.