Closed: cevich closed this issue 1 year ago.
@giuseppe PTAL, is this expected behavior for CGv1/runc with rootless podman?
I don't think that depends on cgroupv1, but rather on the storage driver.
I'd expect the same output when the graphroot is vfs.
We should skip the test when $(podman info --format "{{.Store.GraphDriverName}}") is vfs.
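A minimal sketch of that skip in shell (not the actual test code; should_skip is a hypothetical helper, and in the real BATS test the value would come from the podman info query above):

```shell
# Hypothetical helper: skip when the reported storage driver is vfs.
should_skip() {
    [ "$1" = "vfs" ]
}

# Stand-in for: driver=$(podman info --format '{{.Store.GraphDriverName}}')
driver="vfs"
if should_skip "$driver"; then
    echo "skip: storage driver is $driver"
fi
```

In the real suite this would sit near the top of the failing test, using podman's own skip helpers.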
Can you please check if it is using vfs? It is not clear from the podman info output above, since that is for root. If it is vfs, could we install fuse-overlayfs?
Ahh, you might be onto something there. It's not explicitly called out in the image-build package install list, I'll get a VM and see if it's there or not.
...well damn, fuse-overlayfs is there. The problem reproduces under hack/get_ci_vm.sh, so let me do that and see whether, as the test user, it's running VFS for some reason.
Ahh, @giuseppe called it. As the rootless user, podman info shows:
...cut...
store:
configFile: /home/some13481dude/.config/containers/storage.conf
...cut...
graphDriverName: vfs
graphOptions: {}
graphRoot: /home/some13481dude/.local/share/containers/storage
graphRootAllocated: 211116445696
graphRootUsed: 5158154240
...cut...
Oddly enough, there is no user or system storage.conf specifying VFS, so it must be set by some other means. I checked and /usr/bin/fuse-overlayfs is definitely there. Hmmmm.
@nalind or @mtrmac either of you have any idea why these new Debian VMs would be selecting VFS for storage by default?
Update: On my Debian VM as root, podman info shows graphDriverName: overlay. But I just made a brand-new user (one that has never run any tests or podman-anything). Its info output also shows graphDriverName: vfs.
I know absolutely nothing about that problem space, so this is probably not very helpful:
I'd go looking for data about how the driver decision was made in podman --log-level=debug output (maybe only on the initial Podman run in that environment, before it records state?), and if that debug log doesn't contain the data, I'd suggest that it would be worth adding there.
At least code inspection suggests log entries like
"[graphdriver] trying provided driver %q"
"[graphdriver] using prior storage driver: %s"
(but there seem to be no log entries about the logic actually choosing a driver when none is specified, based on the user-provided or built-in priority list — and, worse, if a driver is on the priority list but its initialization fails, that error is AFAICS not logged. At least the latter part seems quite useful for debugging the behavior).
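To illustrate the gap described above, here is a shell sketch of priority-list driver selection (the real logic is Go code in containers/storage; try_init, the two-entry priority list, and the debug message are assumptions for illustration, with only vfs "succeeding" to mimic a host where overlay initialization fails):

```shell
# Stand-in for per-driver initialization; only vfs succeeds here.
try_init() { [ "$1" = "vfs" ]; }

# Walk a priority list and, crucially, log each failed initialization
# instead of silently falling through -- the log entry whose absence
# is being pointed out above.
pick_driver() {
    for drv in overlay vfs; do
        if try_init "$drv"; then
            echo "$drv"
            return 0
        fi
        echo "debug: storage driver $drv failed to initialize" >&2
    done
    return 1
}

pick_driver
```

With such a debug line, a rootless run on this Debian VM would immediately show why overlay was rejected and vfs was chosen.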
Sorry @mtrmac, I thought this was part of your wheelhouse. I appreciate the debug suggestion though; I'll give that a try and see where it takes me. It seems likely something in the environment is causing it, so maybe that'll show up in the output.
Update: debug-level output from running podman info as the rootless user:
$ bin/podman --log-level=debug info
INFO[0000] bin/podman filtering at log level debug
DEBU[0000] Called info.PersistentPreRunE(bin/podman --log-level=debug info)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/some11319dude/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
...cut...
Mystery solved:
According to @nalind, VFS is the hard-wired default for rootless if nothing else is selected. On Fedora, the containers-common package supplies a /usr/share/containers/storage.conf which has driver=overlay set. On Debian SID there is no such file or package, so the user gets the default VFS.
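For anyone hitting this on Debian in the meantime, a per-user workaround sketch (the path follows the containers-storage.conf(5) search order; this only takes effect on a fresh graphroot, so an existing vfs store must be removed first, e.g. with podman system reset):

```shell
# Hypothetical workaround: create a per-user storage.conf selecting overlay.
conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/containers"
mkdir -p "$conf_dir"
cat > "$conf_dir/storage.conf" <<'EOF'
[storage]
driver = "overlay"
EOF
grep '^driver' "$conf_dir/storage.conf"
```

A subsequent podman info --format '{{.Store.GraphDriverName}}' as that user should then report overlay instead of vfs.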
@siretart I'm not sure what the Debian policy / common practice is here. The podman "experience" with the VFS storage driver is sub-optimal. It's definitely the "safe" choice, but it will leave new rootless users with a "podman is slow" taste in their mouths. It also differs from the built-in default (overlay) for root; again, a safe choice.
In Fedora there's a containers-common package that places a default /usr/share/containers/storage.conf file. That file sets (condensed / among other options):
[storage]
driver = "overlay"
[storage.options.overlay]
mountopt = "nodev,metacopy=on"
I just verified manually that having this file on Debian results in new rootless users getting the overlay driver by "default". Is there a containers-common package or similar mechanism for Debian that would provide the best new (rootless) user experience WRT the storage driver?
A friendly reminder that this issue had no activity for 30 days.
Issue Description
Using Debian SID to run podman's integration tests, this test fails because the volume device numbers are (unexpectedly) all 0x801. Running the test manually via hack/bats reproduces the same results.
Steps to reproduce the issue
1. Obtain a test VM (hack/get_ci_vm.sh sys podman debian-12 rootless host)
2. Run hack/bats 070-build
Describe the results you received
Describe the results you expected
Test should pass
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
Debian GNU/Linux bookworm/sid
Kernel: 6.1.0-4-cloud-amd64
Cgroups: tmpfs
dpkg-query: no packages found matching containers-common
dpkg-query: no packages found matching cri-o-runc
conmon-2.1.6+ds1-1-amd64
containernetworking-plugins-1.1.1+ds1-3+b2-amd64
criu-3.17.1-2-amd64
crun-1.8-1-amd64
golang-2:1.19~1-amd64
libseccomp2-2.5.4-1+b3-amd64
podman-4.3.1+ds1-5+b2-amd64
runc-1.1.4+ds1-1+b2-amd64
skopeo-1.9.3+ds1-1+b1-amd64
slirp4netns-1.2.0-1-amd64
Additional information
Logs from a run in CI showing the same failure & error.