Closed rasup closed 4 years ago
We have a soft cap of 4096 containers, pods, and volumes by default. If you need to raise this limit, edit `libpod.conf`, increase `num_locks`, and then run `podman system renumber`.
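A sketch of that fix, assuming the rootless config path and a doubled value (both are my assumptions, not from the thread):

```toml
# ~/.config/containers/libpod.conf (rootless) or /etc/containers/libpod.conf;
# the thread gives 4096 as the default soft cap — raise it as needed
num_locks = 8192
```

After saving the change, run `podman system renumber` with all containers stopped so the lock allocations are rebuilt against the larger pool.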
On Sun, Feb 9, 2020, 14:31 rasup notifications@github.com wrote:
/kind bug
Description: `podman run` fails with `error allocating lock for new volume: no space left on device`.
Steps to reproduce the issue:
- Build an image:

```dockerfile
FROM fedora:31
VOLUME /d0 /d1 /d2
```
The more volumes the image declares, the faster it fails; with 1025 volume directories it fails when starting the second container.
- Run in a loop:

```shell
container=$(podman run -t -d $image bash)
podman stop $container
podman rm $container
```
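The image above declares only three volumes; to reach the 1025-directory case mentioned earlier, a generated Containerfile is quicker than typing the paths out (this generator is my sketch, not part of the original report):

```shell
# Hypothetical generator: one FROM line plus one VOLUME line listing /d0 .. /d1024
{
  echo 'FROM fedora:31'
  printf 'VOLUME %s\n' "$(seq -f '/d%g' 0 1024 | tr '\n' ' ')"
} > Containerfile
# then: podman build -t manyvols .
```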
Describe the results you received: `podman run` eventually fails with the error message stated above.

Describe the results you expected: I expected it to continue being able to start new containers; `docker` doesn't seem to have this problem.

Additional information you deem important: if the `--rm` option to `podman run` is used (no need for `podman rm` then), or if `podman rm --volumes` is used, it behaves as I expect. `podman volume prune` makes it possible to start new containers again.
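A toy model of the mechanism (my assumption about why this happens, not Podman's actual code): Podman draws one lock per container and one per volume from a fixed-size pool, and `podman rm` without `--volumes` returns only the container's lock, so each iteration of the loop leaks one lock until the pool is empty:

```shell
# Toy pool of num_locks locks (the value 8 is illustrative, not podman's default)
num_locks=8
free=$num_locks
leaked=0
while [ "$free" -ge 2 ]; do   # need one lock for the container, one for the volume
  free=$((free - 2))          # podman run: allocate container + anonymous volume locks
  free=$((free + 1))          # podman rm (no --volumes): only the container lock returns
  leaked=$((leaked + 1))      # the volume's lock is never released
done
echo "leaked=$leaked free=$free"  # eventually: "no space left on device" on the next run
```

This also matches the observed workarounds: `--rm` and `podman rm --volumes` release the volume locks as they go, and `podman volume prune` releases them all at once.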
Output of `podman version`:

```
podman version 1.8.0
```
Output of `podman info --debug`:

```yaml
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.10-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.10, commit: 6b526d9888abb86b9e7de7dfdeec0da98ad32ee0'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  MemFree: 550481920
  MemTotal: 2083835904
  OCIRuntime:
    name: crun
    package: crun-0.12.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.12.1
      commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 2217734144
  SwapTotal: 2217734144
  arch: amd64
  cpus: 2
  eventlogger: journald
  hostname: fedora31.localdomain
  kernel: 5.3.11-300.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 4h 22m 55.63s (Approximately 0.17 days)
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/vagrant/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.5-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.5
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/vagrant/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 8
  RunRoot: /run/user/1000/containers
  VolumePath: /home/vagrant/.local/share/containers/storage/volumes
```
Package info (e.g. output of `rpm -q podman` or `apt list podman`):

```
podman-1.8.0-2.fc31.x86_64
```
Additional environment details (AWS, VirtualBox, physical, etc.): VirtualBox, Fedora 31
— Reply to this email directly or view it on GitHub: https://github.com/containers/libpod/issues/5136
Closing as Matt provided the answer. The behavior is expected.