Closed arctic-alpaca closed 3 years ago
Is this new behavior (IE only appeared in a recent release)? How many containers, images, and volumes do you have for the user running Podman?
> Is this new behavior (IE only appeared in a recent release)?

I only started using podman with version 3.0.1, so I can't speak for older versions, sorry.

> How many containers, images, and volumes do you have for the user running Podman?

I wasn't actively using podman while testing this issue, so no containers were running. I'm not entirely sure how to answer this question properly, so I hope these commands tell you what you need to know. I'm happy to supply more information if needed.
```
xyz@DESKTOP:/mnt/c/Users/xyz$ podman volume ls
xyz@DESKTOP:/mnt/c/Users/xyz$ podman images list -a
REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
xyz@DESKTOP:/mnt/c/Users/xyz$ podman container list -a
CONTAINER ID  IMAGE                                          COMMAND               CREATED     STATUS                   PORTS                   NAMES
266134916e56  docker.io/library/hello-world:latest           /hello                9 days ago  Exited (0) 9 days ago                            naughty_mcnulty
9881302aaf0d  docker.io/library/hello-world:latest           --cgroup-manager ...  9 days ago  Created                                          silly_zhukovsky
1c16c369e94a  docker.io/library/hello-world:latest           /hello                9 days ago  Exited (0) 9 days ago                            pedantic_euler
f0bc33420d6f  docker.io/selenium/standalone-chrome:3.141.59  /opt/bin/entry_po...  9 days ago  Exited (130) 9 days ago  0.0.0.0:4444->4444/tcp  distracted_lamport
0e82e2cd19fe  docker.io/selenium/standalone-chrome:3.141.59  /opt/bin/entry_po...  9 days ago  Exited (130) 9 days ago  0.0.0.0:4444->4444/tcp  happy_zhukovsky
3f02db778ce8  docker.io/selenium/standalone-chrome:3.141.59  /opt/bin/entry_po...  7 days ago  Exited (130) 6 days ago  0.0.0.0:4444->4444/tcp  bold_knuth
3a5d3bf7aab0  docker.io/selenium/standalone-chrome:3.141.59  /opt/bin/entry_po...  6 days ago  Exited (130) 5 days ago  0.0.0.0:4444->4444/tcp  naughty_bassi
xyz@DESKTOP:/mnt/c/Users/xyz$
```
No images listed, but `container ls` does show containers... that doesn't make any sense. Are the containers still usable (IE, does `podman start` on any of them work without error)?
Every container starts up without errors from podman or the container itself besides:

```
9881302aaf0d  docker.io/library/hello-world:latest  --cgroup-manager ...  9 days ago  Created  silly_zhukovsky
```

This container returns:

```
xyz@DESKTOP-3COMHL6:/mnt/c/Users/xyz$ podman start -a 9881302aaf0d
Error: unable to start container 9881302aaf0decd6f722da7a85f6931123ffdc1a3076fd6f54954a3788377396: executable file `--cgroup-manager` not found in $PATH: No such file or directory: OCI not found
```
I think this is a container I passed `--cgroup-manager=cgroupfs` to when running it while following this guide.
Ah - that's a simple fix: you put the `--cgroup-manager` flag too late (after the image name) in the command. So `podman run hello-world --cgroup-manager=cgroupfs` treats the flag as an argument to the container command, while `podman run --cgroup-manager=cgroupfs hello-world` treats it as an argument to Podman.
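The rule can be sketched roughly like this (a hypothetical, heavily simplified model of the parsing, not Podman's actual code): flags before the first non-flag token belong to Podman, the first non-flag token is the image, and everything after it becomes the container's command.

```python
def split_run_args(argv):
    """Very simplified model of `podman run` argument handling:
    flags before the image name go to Podman; everything after the
    image name becomes the command run inside the container."""
    podman_flags = []
    i = 0
    while i < len(argv) and argv[i].startswith("-"):
        podman_flags.append(argv[i])
        i += 1
    image = argv[i] if i < len(argv) else None
    container_cmd = argv[i + 1:]
    return podman_flags, image, container_cmd

# Flag before the image name: consumed by Podman.
print(split_run_args(["--cgroup-manager=cgroupfs", "hello-world"]))
# -> (['--cgroup-manager=cgroupfs'], 'hello-world', [])

# Flag after the image name: becomes the container's command,
# which is why the runtime then tries to exec `--cgroup-manager`.
print(split_run_args(["hello-world", "--cgroup-manager=cgroupfs"]))
# -> ([], 'hello-world', ['--cgroup-manager=cgroupfs'])
```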
That doesn't really expose the underlying issue here, though. Something is going on with image storage. @vrothberg @rhatdan @baude Ever seen something like this before? We clearly have images (containers are running), but `podman image list` shows nothing.
I just realized I typed `podman images list -a`, which led to no results. `podman image list -a`, on the other hand, shows the images properly:
```
xyz@DESKTOP:/mnt/c/Users/xyz$ podman image list -a
REPOSITORY                            TAG       IMAGE ID      CREATED      SIZE
docker.io/selenium/standalone-chrome  3.141.59  efa240b85d81  5 weeks ago  1.04 GB
docker.io/library/hello-world         latest    d1165f221234  6 weeks ago  20 kB
```
So no error from `images list`? That's definitely a bug; invalid commands need to error.
To the original bug: the Selenium image is fairly large at 1 GB, but not exceptionally so, and it only has ~20 layers. I see nothing obvious here that would cause an excessive amount of time to be taken on `podman system df`.
> So no error from `images list`?

Yes, this is the complete output:

```
xyz@DESKTOP:/mnt/c/Users/xyz$ podman images list
REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
xyz@DESKTOP:/mnt/c/Users/xyz$
```
Alright. `podman images` is the original command for listing images (before the `podman image` alias existed). What I did not realize is that `podman images` accepts an argument, the name of the image to list... so your command was trying to list all images named `list`. This is a little confusing, but unfortunately I think it's been baked in for so long that we can't really change it.
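The resulting behaviour — an empty table instead of an error — can be illustrated with a small sketch (a hypothetical simplified filter, not the actual Podman/c/storage code): positional arguments are treated as repository names to match, and a name that matches nothing simply yields an empty list.

```python
def images_cmd(images, args):
    """Simplified model of `podman images [NAME...]`: positional args
    filter by repository name instead of being treated as subcommands."""
    if not args:
        return images
    return [img for img in images if any(a in img["repository"] for a in args)]

# Sample data mirroring the two images from this issue.
images = [
    {"repository": "docker.io/selenium/standalone-chrome", "tag": "3.141.59"},
    {"repository": "docker.io/library/hello-world", "tag": "latest"},
]

print(images_cmd(images, ["hello"]))  # matches hello-world
# -> [{'repository': 'docker.io/library/hello-world', 'tag': 'latest'}]
print(images_cmd(images, ["list"]))   # no image named "list": empty, no error
# -> []
```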
Just to confirm, this is working correctly and just as you said:

```
xyz@DESKTOP:/mnt/c/Users/xyz$ podman images hello
REPOSITORY                     TAG     IMAGE ID      CREATED      SIZE
docker.io/library/hello-world  latest  d1165f221234  6 weeks ago  20 kB
```
I'm also seeing the same thing on Fedora 32 (will be upgrading as soon as I can).
```
❯ podman version
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.14.10
Built:        Tue Dec 8 09:37:43 2020
OS/Arch:      linux/amd64
❯ podman info -D
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.27-1.fc32.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: 253f230b3f653ff8ed47efbfffa52f0ae3f1820d'
  cpus: 88
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: journald
  hostname: *redacted*
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 20439
      size: 1
    - container_id: 1
      host_id: 200000
      size: 100000
    uidmap:
    - container_id: 0
      host_id: 20439
      size: 1
    - container_id: 1
      host_id: 200000
      size: 100000
  kernel: 5.11.16-100.fc32.x86_64
  linkmode: dynamic
  memFree: 399182725120
  memTotal: 404299915264
  ociRuntime:
    name: crun
    package: crun-0.17-1.fc32.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.17
      commit: 0e9229ae34caaebcb86f1fde18de3acaf18c6d9a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/20439/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc32.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 34359734272
  swapTotal: 34359734272
  uptime: 1h 50m 9.47s (Approximately 0.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/ccatlett/.config/containers/storage.conf
  containerStore:
    number: 9
    paused: 0
    running: 0
    stopped: 9
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-1.fc32.x86_64
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 1.4
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  graphRoot: /home/ccatlett/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 68
  runRoot: /run/user/20439/containers
  volumePath: /home/ccatlett/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1607438263
  BuiltTime: Tue Dec 8 09:37:43 2020
  GitCommit: ""
  GoVersion: go1.14.10
  OsArch: linux/amd64
  Version: 2.2.1
```
A friendly reminder that this issue had no activity for 30 days.
@giuseppe Is this improved with the latest fuse-overlay changes?
This was an issue in older c/storage versions. It is fixed in the latest version.
Closing since it is fixed in the main branch.
**Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)**

/kind bug

**Description**

Hi, rootless `podman system df` takes roughly 15 minutes to return a result. Same behaviour with the API. When using sudo, the result is almost instantaneous as expected; same for the API if started with sudo.

**Steps to reproduce the issue:**

Run `podman system df` as a non-root user on WSL2 Debian 10, OR:

```
./podman system service unix:///home/`whoami`/podman.sock --log-level=debug --time=500
curl --unix-socket /home/`whoami`/podman.sock http://d/v3.0.0/libpod/system/df
```
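The curl call can also be reproduced from Python's standard library; a sketch, assuming the `podman system service` from the previous step is listening on the socket path shown above (the dummy host `d` and the `/v3.0.0/libpod/system/df` path are taken from the curl command):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a Unix-domain socket instead of TCP."""

    def __init__(self, socket_path):
        # "d" is a dummy hostname, matching the `http://d/...` form curl uses.
        super().__init__("d")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def libpod_df(socket_path):
    """GET /v3.0.0/libpod/system/df over the given Podman API socket."""
    conn = UnixHTTPConnection(socket_path)
    conn.request("GET", "/v3.0.0/libpod/system/df")
    resp = conn.getresponse()
    return resp.status, resp.read()

# Usage (requires the podman system service above to be running):
#   status, body = libpod_df("/home/xyz/podman.sock")
```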
**Describe the results you received:**

Nothing gets output; podman runs continuously and doesn't return until roughly 15 minutes later. `ps aux` shows `podman system df -v` running and using quite a bit of CPU.

Using the API, this is the log output by the API process (complete log until the call returned).

Eventual result of API call:

Eventual result of command:

**Describe the results you expected:**

The command should return roughly as fast as it does when run as root.

**Additional information you deem important (e.g. issue happens only occasionally):**

Tested with podman 3.0.1 and a podman 3.2.0-dev static build from CI.

**Output of `podman version`:**

**Output of `podman info --debug`:**

**Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?**

Yes

**Additional environment details (AWS, VirtualBox, physical, etc.):**

Windows 10, WSL2, Debian 10, x64