cgruver closed this issue 1 year ago.
FWIW, it looks like podman is defaulting to fuse-overlayfs on a plain CentOS Stream system too, not container-in-container.
Kernel: 5.14.0-205.el9.x86_64
podman version
Client: Podman Engine
Version: 4.3.1
API Version: 4.3.1
Go Version: go1.19.2
Built: Mon Nov 28 07:21:08 2022
OS/Arch: linux/amd64
podman info:
host:
arch: amd64
buildahVersion: 1.28.0
cgroupControllers:
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.5-1.el9.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.5, commit: 48adb81a22c26f0660f0f37d984baebe7b9ade98'
cpuUtilization:
idlePercent: 99.88
systemPercent: 0.06
userPercent: 0.06
cpus: 4
distribution:
distribution: '"centos"'
version: "9"
eventLogger: file
hostname: dev-host
idMappings:
gidmap:
- container_id: 0
host_id: 100
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.14.0-205.el9.x86_64
linkmode: dynamic
logDriver: k8s-file
memFree: 30525919232
memTotal: 33271230464
networkBackend: netavark
ociRuntime:
name: crun
package: crun-1.7.2-1.el9.x86_64
path: /usr/bin/crun
version: |-
crun version 1.7.2
commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.0-2.el9.x86_64
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.2
swapFree: 16844320768
swapTotal: 16844320768
uptime: 169h 17m 37.00s (Approximately 7.04 days)
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /home/cgruver/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/cgruver/.local/share/containers/storage
graphRootAllocated: 481321295872
graphRootUsed: 7043735552
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 12
runRoot: /run/user/1000/containers
volumePath: /home/cgruver/.local/share/containers/storage/volumes
version:
APIVersion: 4.3.1
Built: 1669638068
BuiltTime: Mon Nov 28 07:21:08 2022
GitCommit: ""
GoVersion: go1.19.2
Os: linux
OsArch: linux/amd64
Version: 4.3.1
After running podman system reset, the .has-mount-program file is recreated on the next podman command.
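For anyone following along, a rough sketch of how to check for that marker and force the driver options to be re-detected; the exact location of the marker under the graph root is my assumption based on the rootless defaults shown above:
GRAPH_ROOT="$HOME/.local/share/containers/storage"
# Check whether storage was previously configured with a mount program
# (path is an assumption: the marker is typically kept under <graphRoot>/overlay/)
ls -l "$GRAPH_ROOT/overlay/.has-mount-program" 2>/dev/null
# Wipe local storage so driver options are re-detected on the next command
# (this removes all local images, containers, and volumes)
podman system reset
# If fuse-overlayfs is still picked up afterwards, a mount_program is likely
# set in one of the storage.conf files
grep -n mount_program /etc/containers/storage.conf "$HOME/.config/containers/storage.conf" 2>/dev/null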
Here is the same build in an Eclipse Che workspace with debug:
Possibly interesting entries:
...
WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers
...
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/user/.local/share/containers/storage
DEBU[0000] Using run root /tmp/podman-run-1000690000/containers
DEBU[0000] Using static dir /home/user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1000690000/libpod/tmp
DEBU[0000] Using volume path /home/user/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Unable to create kernel-style whiteout: operation not permitted
DEBU[0000] backingFs=overlayfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
...
Full Log:
bash-5.1$ podman --log-level debug build -t test:test .
INFO[0000] podman filtering at log level debug
DEBU[0000] Called build.PersistentPreRunE(podman --log-level debug build -t test:test .)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] systemd-logind: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/user/.local/share/containers/storage
DEBU[0000] Using run root /tmp/podman-run-1000690000/containers
DEBU[0000] Using static dir /home/user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1000690000/libpod/tmp
DEBU[0000] Using volume path /home/user/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers
INFO[0000] podman filtering at log level debug
DEBU[0000] Called build.PersistentPreRunE(podman --log-level debug build -t test:test .)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] systemd-logind: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/user/.local/share/containers/storage
DEBU[0000] Using run root /tmp/podman-run-1000690000/containers
DEBU[0000] Using static dir /home/user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1000690000/libpod/tmp
DEBU[0000] Using volume path /home/user/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Unable to create kernel-style whiteout: operation not permitted
DEBU[0000] backingFs=overlayfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
DEBU[0000] Successfully loaded 1 networks
DEBU[0000] Podman detected system restart - performing state refresh
INFO[0000] Setting parallel job count to 37
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] base for stage 0: "quay.io/cgruver0/che/podman-basic:latest"
DEBU[0000] FROM "quay.io/cgruver0/che/podman-basic:latest"
STEP 1/1: FROM quay.io/cgruver0/che/podman-basic:latest
DEBU[0000] Pulling image quay.io/cgruver0/che/podman-basic:latest (policy: missing)
DEBU[0000] Looking up image "quay.io/cgruver0/che/podman-basic:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "quay.io/cgruver0/che/podman-basic:latest" ...
DEBU[0000] Trying "quay.io/cgruver0/che/podman-basic:latest" ...
DEBU[0000] Trying "quay.io/cgruver0/che/podman-basic:latest" ...
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Attempting to pull candidate quay.io/cgruver0/che/podman-basic:latest for quay.io/cgruver0/che/podman-basic:latest
DEBU[0000] parsed reference into "[overlay@/home/user/.local/share/containers/storage+/tmp/podman-run-1000690000/containers]quay.io/cgruver0/che/podman-basic:latest"
Trying to pull quay.io/cgruver0/che/podman-basic:latest...
DEBU[0000] Copying source image //quay.io/cgruver0/che/podman-basic:latest to destination image [overlay@/home/user/.local/share/containers/storage+/tmp/podman-run-1000690000/containers]quay.io/cgruver0/che/podman-basic:latest
DEBU[0000] Using registries.d directory /etc/containers/registries.d
DEBU[0000] Trying to access "quay.io/cgruver0/che/podman-basic:latest"
DEBU[0000] No credentials matching quay.io/cgruver0/che/podman-basic found in /tmp/podman-run-1000690000/containers/auth.json
DEBU[0000] No credentials matching quay.io/cgruver0/che/podman-basic found in /home/user/.config/containers/auth.json
DEBU[0000] No credentials matching quay.io/cgruver0/che/podman-basic found in /home/user/.docker/config.json
DEBU[0000] No credentials matching quay.io/cgruver0/che/podman-basic found in /home/user/.dockercfg
DEBU[0000] No credentials for quay.io/cgruver0/che/podman-basic found
DEBU[0000] No signature storage configuration found for quay.io/cgruver0/che/podman-basic:latest, using built-in default file:///home/user/.local/share/containers/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io
DEBU[0000] GET https://quay.io/v2/
DEBU[0000] Ping https://quay.io/v2/ status 401
DEBU[0000] GET https://quay.io/v2/auth?scope=repository%3Acgruver0%2Fche%2Fpodman-basic%3Apull&service=quay.io
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] GET https://quay.io/v2/cgruver0/che/podman-basic/manifests/latest
DEBU[0000] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json"
DEBU[0000] Using blob info cache at /home/user/.local/share/containers/cache/blob-info-cache-v1.boltdb
DEBU[0000] IsRunningImageAllowed for image docker:quay.io/cgruver0/che/podman-basic:latest
DEBU[0000] Using default policy section
DEBU[0000] Requirement 0: allowed
DEBU[0000] Overall: allowed
DEBU[0000] Downloading /v2/cgruver0/che/podman-basic/blobs/sha256:4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85
DEBU[0000] GET https://quay.io/v2/cgruver0/che/podman-basic/blobs/sha256:4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85
Getting image source signatures
DEBU[0001] Reading /home/user/.local/share/containers/sigstore/cgruver0/che/podman-basic@sha256=6e0d34b764affe8f865878c47b6ff7637d8cad07532dfcf8329f8659773fcb2c/signature-1
DEBU[0001] Not looking for sigstore attachments: disabled by configuration
DEBU[0001] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0001] ... will first try using the original manifest unmodified
DEBU[0001] Checking if we can reuse blob sha256:545f20c09f6464668d2754516b830b59a804925ca51897f4ea562a9974f1797f: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true
DEBU[0001] Checking if we can reuse blob sha256:df72cb0b47c287f5908f92053a7174e3ddcbc2f24c0e6a24d8dbc8d90030291b: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true
DEBU[0001] Failed to retrieve partial blob: blob type not supported for partial retrieval
Copying blob 545f20c09f64 [--------------------------------------] 0.0b / 107.3MiB (skipped: 0.0b = 0.00%)
DEBU[0001] Downloading /v2/cgruver0/che/podman-basic/blobs/sha256:545f20c09f6464668d2754516b830b59a804925ca51897f4ea562a9974f1797f
DEBU[0001] GET https://quay.io/v2/cgruver0/che/podman-basic/blobs/sha256:545f20c09f6464668d2754516b830b59a804925ca51897f4ea562a9974f1797f
DEBU[0001] Failed to retrieve partial blob: blob type not supported for partial retrieval
DEBU[0001] Downloading /v2/cgruver0/che/podman-basic/blobs/sha256:df72cb0b47c287f5908f92053a7174e3ddcbc2f24c0e6a24d8dbc8d90030291b
Copying blob 545f20c09f64 [--------------------------------------] 0.0b / 107.3MiB
Copying blob df72cb0b47c2 [--------------------------------------] 0.0b / 55.3MiB (skipped: 0.0b = 0.00%)
Copying blob 545f20c09f64 [--------------------------------------] 0.0b / 107.3MiB
Copying blob df72cb0b47c2 [--------------------------------------] 36.6KiB / 55.3MiB
Copying blob 545f20c09f64 [===============>----------------------] 46.5MiB / 107.3MiB
Copying blob 545f20c09f64 done
Copying blob 545f20c09f64 done
Copying blob df72cb0b47c2 done
DEBU[0022] No compression detected
DEBU[0022] Compression change for blob sha256:4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85 ("application/vnd.oci.image.config.v1+json") not supported
DEBU[0022] Using original blob without modification
Copying config 4c826aaf39 done
Writing manifest to image destination
Storing signatures
DEBU[0022] setting image creation date to 2023-01-10 14:30:06.343291516 +0000 UTC
DEBU[0022] created new image ID "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85"
DEBU[0022] saved image metadata "{\"signatures-sizes\":{\"sha256:6e0d34b764affe8f865878c47b6ff7637d8cad07532dfcf8329f8659773fcb2c\":[]}}"
DEBU[0022] added name "quay.io/cgruver0/che/podman-basic:latest" to image "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85"
DEBU[0022] Pulled candidate quay.io/cgruver0/che/podman-basic:latest successfully
DEBU[0022] Looking up image "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85" in local containers storage
DEBU[0022] Trying "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85" ...
DEBU[0022] parsed reference into "[overlay@/home/user/.local/share/containers/storage+/tmp/podman-run-1000690000/containers]@4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85"
DEBU[0022] Found image "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85" as "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85" in local containers storage
DEBU[0022] Found image "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85" as "4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85" in local containers storage ([overlay@/home/user/.local/share/containers/storage+/tmp/podman-run-1000690000/containers]@4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85)
DEBU[0022] exporting opaque data as blob "sha256:4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85"
DEBU[0022] exporting opaque data as blob "sha256:4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85"
DEBU[0022] exporting opaque data as blob "sha256:4c826aaf39f88273015c72549551f7cd559a0ab6d3c9f6ab02ab622a4d0ebf85"
DEBU[0022] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0022] [graphdriver] trying provided driver "overlay"
DEBU[0022] overlay: storage already configured with a mount-program
DEBU[0022] backingFs=overlayfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0022] Check for idmapped mounts support create mapped mount: invalid argument
DEBU[0022] overlay: mount_data=lowerdir=/home/user/.local/share/containers/storage/overlay/l/47MJSR3GV3DAJXUFANWSTBNC7H:/home/user/.local/share/containers/storage/overlay/l/KHNLLU2AIGALFA6KIFMPABBMMQ,upperdir=/home/user/.local/share/containers/storage/overlay/12b0047ed9aa887c512293f4862f0ddaddd91d3ae2d8122ed8c7ce580b2b3c03/diff,workdir=/home/user/.local/share/containers/storage/overlay/12b0047ed9aa887c512293f4862f0ddaddd91d3ae2d8122ed8c7ce580b2b3c03/work,,volatile
ERRO[0022] Unmounting /home/user/.local/share/containers/storage/overlay/12b0047ed9aa887c512293f4862f0ddaddd91d3ae2d8122ed8c7ce580b2b3c03/merged: invalid argument
Error: mounting new container: mounting build container "48f75a8309ccd948b598d3e019d0cccc46b073e80cc52223dc51758f1f684267": creating overlay mount to /home/user/.local/share/containers/storage/overlay/12b0047ed9aa887c512293f4862f0ddaddd91d3ae2d8122ed8c7ce580b2b3c03/merged, mount_data="lowerdir=/home/user/.local/share/containers/storage/overlay/l/47MJSR3GV3DAJXUFANWSTBNC7H:/home/user/.local/share/containers/storage/overlay/l/KHNLLU2AIGALFA6KIFMPABBMMQ,upperdir=/home/user/.local/share/containers/storage/overlay/12b0047ed9aa887c512293f4862f0ddaddd91d3ae2d8122ed8c7ce580b2b3c03/diff,workdir=/home/user/.local/share/containers/storage/overlay/12b0047ed9aa887c512293f4862f0ddaddd91d3ae2d8122ed8c7ce580b2b3c03/work,,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
unknown argument ignored:
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1
DEBU[0022] Failed to add pause process to systemd sandbox cgroup: exec: "dbus-launch": executable file not found in $PATH
@giuseppe PTAL
That seems expected, since overlay on top of overlay won't work. You can use a volume to store the graph storage inside the first container and make sure its file system is not overlay.
OK. So, for now, is VFS really the only available option for OpenShift container-in-container? It works. It's just slow. ;-)
Also... if that graph storage idea is a possible solution, do you have a how-to or RTFM that you could point me to? I understood the words, but have no idea what you are talking about. :-)
No, you have to mount a volume on the containers/storage directory to get it to work.
But you will have to run a more privileged container than the default security of OpenShift allows.
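To make that concrete, here is a rough sketch of the kind of volume mount being described, using an emptyDir on the workspace deployment. The deployment name is a placeholder, and in Che/Dev Spaces you would normally declare this as a volume in the devfile instead:
# Illustrative only: <workspace-deployment> is a placeholder
oc set volume deployment/<workspace-deployment> --add \
  --name=container-storage \
  --type=emptyDir \
  --mount-path=/home/user/.local/share/containers/storage
With the graph root on an emptyDir, the backing filesystem becomes the node's filesystem (xfs/ext4) instead of the workspace container's overlay, which is what the mount error above is complaining about.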
Thanks guys! We'll stick with VFS for now.
I'll close this and go read Dan's new book. ;-)
Hi, can we reopen it? I use podman inside a container and can't make it work with native overlayfs. The kernel version is 5.12.2.
Others have made this work, so this is not a Podman issue. What error are you seeing? Did you set up a volume on top of containers/storage?
Hi, I run this setup on my local machine (kernel 6.x), and native overlay is not enabled. In production, we use podman inside a Firecracker instance with a mounted volume for containers/storage (kernel 5.12.2). But I can't make it work either locally or in production.
podman info inside my local container:
host:
arch: amd64
buildahVersion: 1.28.0
cgroupControllers:
- cpuset
- cpu
- io
- memory
- hugetlb
- pids
- rdma
- misc
cgroupManager: cgroupfs
cgroupVersion: v2
conmon:
package: conmon-2.1.5-1.fc37.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.5, commit: '
cpuUtilization:
idlePercent: 98.2
systemPercent: 0.44
userPercent: 1.36
cpus: 32
distribution:
distribution: fedora
variant: container
version: "37"
eventLogger: file
hostname: a43ec34ce754
idMappings:
gidmap:
- container_id: 0
host_id: 0
size: 1
uidmap:
- container_id: 0
host_id: 0
size: 1
kernel: 6.0.12-76060006-generic
linkmode: dynamic
logDriver: k8s-file
memFree: 51696779264
memTotal: 67346718720
networkBackend: netavark
ociRuntime:
name: crun
package: crun-1.8-1.fc37.x86_64
path: /usr/bin/crun
version: |-
crun version 1.8
commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
rundir: /run/user/0/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
remoteSocket:
path: /run/user/0/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.0-8.fc37.x86_64
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.3
swapFree: 21474299904
swapTotal: 21474299904
uptime: 0h 39m 22.00s
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
- quay.io
store:
configFile: /root/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.imagestore: /var/lib/containers/share
overlay.mountopt: nodev
graphRoot: /var/lib/containers/storage
graphRootAllocated: 1958315118592
graphRootUsed: 214738485248
graphStatus:
Backing Filesystem: overlayfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/lib/containers/tmp
imageStore:
number: 0
runRoot: /run/containers/storage
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 4.3.1
Built: 1668178887
BuiltTime: Fri Nov 11 15:01:27 2022
GitCommit: ""
GoVersion: go1.19.2
Os: linux
OsArch: linux/amd64
Version: 4.3.1
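For what it's worth, a quick way to see which driver, mount program, and backing filesystem are actually in effect is to filter the podman info output (just grepping the fields shown above):
podman info | grep -E 'graphDriverName|mount_program|Backing Filesystem|Native Overlay Diff|Using metacopy'
In the output above, Backing Filesystem: overlayfs together with Native Overlay Diff: "false" means the graph root itself sits on an overlay filesystem, which is the overlay-on-overlay situation discussed earlier in the thread.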
I was able to fix it with https://github.com/containers/buildah/issues/3666#issuecomment-1351992335
Issue Description
Environment Info:
OpenShift (OKD) 4.12 on SCOS (CentOS Stream Core OS)
Eclipse Che 7.58.0 enabled for rootless container builds
CoreOS kernel version: 5.14.0-200.el9.x86_64
Podman 4.3.1
I am testing podman for rootless container builds within Eclipse Che (OpenShift Dev Spaces).
VFS is painfully slow, especially for container-in-container.
Fuse-overlayfs does not work in rootless container-in-container on OpenShift.
I would like to leverage native overlayfs support but cannot seem to find any documentation on how to enable it.
This really good post from 2021 is the nearest to a guide that I have found: https://www.redhat.com/sysadmin/podman-rootless-overlay
Attempting a basic podman build fails because it defaults to fuse-overlayfs.
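For reference, rootless podman picks native (kernel) overlay automatically when the kernel supports unprivileged overlay mounts (roughly 5.11+, or 5.13+ with SELinux) and no mount_program is configured. A minimal sketch of a user-level storage.conf with no mount program follows; this is the configuration I was aiming for, not a verified fix for the container-in-container case:
mkdir -p ~/.config/containers
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options.overlay]
# No mount_program set: with kernel support, podman uses native
# overlayfs for rootless builds instead of fuse-overlayfs.
# mount_program = "/usr/bin/fuse-overlayfs"
EOF
# Existing storage must be reset for changed driver options to take effect
podman system reset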
Steps to reproduce the issue
podman info
podman pull quay.io/cgruver0/che/podman-basic:latest
echo "FROM quay.io/cgruver0/che/podman-basic:latest" > Dockerfile
podman build -t test:test .
The .has-mount-program file exists. If you execute podman system reset and try the above steps again, the same results are observed and the .has-mount-program file is recreated.
Describe the results you received
The podman build -t test:test . command fails (see the fuse-overlayfs mount error in the debug log above).
Describe the results you expected
Successful image build
podman info output
Podman in a container
Yes
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
OpenShift (OKD) 4.12 on SCOS (CentOS Stream Core OS)
Eclipse Che 7.58.0 enabled for rootless container builds
CoreOS kernel version: 5.14.0-200.el9.x86_64
Additional information
Dockerfile for the podman container: