Closed: gbhushan86 closed this issue 3 years ago.
Is fsync=0 a supported overlay flag?
@rhvgoyal PTAL
I configured the same setup for AWS/Azure OpenShift with the same podman version and storage configuration; it works on those cloud providers. The issue only appears on IBM Cloud.
If you removed that flag did it work?
I tried after removing the flag; it didn't work.
If you build the image outside of the container using podman build or buildah bud, does it work?
I tried building the image with podman, buildah, and the buildctl tool from a bastion Ubuntu instance, and it builds the image there. The same build does not work from OpenShift deployed on IBM Cloud.
Running podman build from inside of a non privileged container under OpenShift/Kubernetes is not going to work. The problem is podman/buildah needs at least CAP_SETUID, CAP_SETGID in order to build a container.
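For anyone checking this, a quick way to confirm whether the effective capability set inside the build pod includes those capabilities (a sketch; assumes the libcap capsh tool is available in the image):
capsh --print | grep -i -E 'setuid|setgid'   # current capability sets should list cap_setuid and cap_setgid
grep CapEff /proc/self/status                # raw effective-capability bitmask for the shell process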
Hi, I'm using a Jenkins pod template to create the YAML file and trigger podman inside the container to build the Docker image, like this. The same template works fine on OpenShift clusters deployed on other cloud providers (AWS/Azure), but on IBM OpenShift I get this error:
Error: error building at STEP "WORKDIR /": error ensuring container path "/": stat /var/lib/containers/storage/overlay/d0481c59b41522826c1f9ca8a4843adb6d7cf2905bb53bf321c23aed72d93bd3/merged: invalid argument
containerTemplate(name: 'podman', image: 'us.icr.io/mywizdev/podman:v2.0', privileged: true, ttyEnabled: true, command: 'cat')]
image: "us.icr.io/mywizdev/podman:v2.0" imagePullPolicy: "IfNotPresent" name: "podman" resources: limits: {} requests: {} securityContext: privileged: true tty: true volumeMounts:
Ok, so this is a privileged pod. Why this is blowing up only on IBM Cloud, I have no idea.
@support I experimented with this a bit and have a couple of observations. I ran podman in an alpine pod, installing pieces until I got it to work (podman and crun). Other than that, I think I am running in an equivalent container: privileged, same hostPath mount, same configuration files as they provided. This was my first attempt at running podman in a container. I have used it on my Fedora workstation, so I am hardly an "expert" at this.
One obvious thing to look at is how the image they are running podman in is built. Do they have a Dockerfile for their us.icr.io/mywizdev/podman:v2.0 image?
In my experiment I had to modify the /etc/containers/storage.conf that was provided in the issue. Some subtables were missing: the mount_program and mountopt options should be in a [storage.options] or [storage.options.overlay] subtable. I had to make that change to get this to work at all. additionalimagestores should be in a [storage.options] subtable. I had to create additional directories and lock files under /var/lib/shared, which they might already have. My final /etc/containers/storage.conf looked like this:
[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
"/var/lib/shared",
]
[storage.options.overlay]
mount_program="/usr/bin/fuse-overlayfs"
mountopt = "nodev,fsync=0"
I was able to produce an error similar to the error they saw, but that was a result of my experiments: at one point I had tried driver = "overlay2" rather than overlay and I was able to build the image. When I switched back to overlay I got that error. That appears to have been the result of pulling the base image using one driver and then trying to build my image using a different driver; podman was looking for image layers based on the current storage driver. I deleted all the local images and then I was able to build successfully.
Some web searches also suggested issues with the storage driver configuration as likely to cause issues like this.
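For reference, clearing local storage completely after switching drivers can be done along these lines (a sketch; podman system reset is the heavier hammer and wipes everything podman manages):
podman rmi --all --force
podman system reset --force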
Hi JMCMEEK, when you say delete the images, do you mean removing them using the podman rmi command?
Because I removed the local images using podman rmi, then removed the folders inside /var/lib/containers/storage/overlay and nulled the file inside /var/lib/containers/storage/libpod/bolt_state.db. Even after doing all of that, it still fails during the image build using podman.
Yes, we do have a Dockerfile; we built the image with the contents below, after some minor modifications.
FROM quay.io/containers/podman:latest
RUN yum update -y
RUN yum install skopeo -y
RUN curl -fSL "https://github.com/genuinetools/reg/releases/download/v0.16.1/reg-linux-amd64" -o "/usr/local/bin/reg" \
    && echo "${REG_SHA256} /usr/local/bin/reg" | sha256sum -c - \
    && chmod a+x "/usr/local/bin/reg"
RUN curl -fSL "https://github.com/optiopay/klar/releases/download/v2.4.0/klar-2.4.0-linux-amd64" -o "/usr/local/bin/klar" \
    && chmod a+x "/usr/local/bin/klar"
RUN echo $'[[registry]]\n\
location = "xxx.xxx.xxx.xxx:8223"\n\
insecure = true\n\
blocked = false\n\
mirror-by-digest-only = false\n\
prefix = ""\n' \
/etc/containers/registries.conf
Contents of storage.conf in the podman version I'm running. I tried changing the driver from overlay to overlay2 and it still fails to build the image, showing the "invalid argument" error.
[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
"/var/lib/shared",
]
[storage.options.overlay]
mount_program="/usr/bin/fuse-overlayfs"
mountopt = "nodev,fsync=0"
Regards,
Bhushan
@rhatdan I can reproduce a similar error trying to build in a pod that uses the current podman:latest image. No hostPath mounts, etc.
STEP 1: FROM java:openjdk-8-jre-alpine
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/shortnames.conf"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] exporting opaque data as blob "sha256:fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] overlay: mount_data=nodev,fsync=0,lowerdir=/var/lib/containers/storage/overlay/l/NLKPKSHDPUJT2UZPJUSRKLEJU6:/var/lib/containers/storage/overlay/l/Z7WCLF3F237KPQLAYCOPSSW5KT:/var/lib/containers/storage/overlay/l/CWVNTQZJUMUZ6XSBLDXN7F4CB2,upperdir=/var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/diff,workdir=/var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/work
DEBU[0000] Container ID: 0846b89bf488fb6e2e148c873c11b0abfb128ecb881300a667217b8bdf879125
DEBU[0000] Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin LANG=C.UTF-8 JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre JAVA_VERSION=8u111 JAVA_ALPINE_VERSION=8.111.14-r0] Command:workdir Args:[/] Flags:[] Attrs:map[] Message:WORKDIR / Original:WORKDIR /}
STEP 2: WORKDIR /
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] exporting opaque data as blob "sha256:fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] error changing to intended-new-root directory "/var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged": error changing to intended-new-root directory "/var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged": chdir /var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged: invalid argument
DEBU[0000] error building at step {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin LANG=C.UTF-8 JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre JAVA_VERSION=8u111 JAVA_ALPINE_VERSION=8.111.14-r0] Command:workdir Args:[/] Flags:[] Attrs:map[] Message:WORKDIR / Original:WORKDIR /}: error decoding response: EOF
Error: error building at STEP "WORKDIR /": error decoding response: EOF
I also get that with the v2.1.1 image.
I tried comparing that to a hand-built podman image: alpine + crun + podman + selected /etc/containers config files. That works fine. I didn't see anything glaringly obvious with respect to the versions of the various components. The pod specs differ only in the image.
Maybe it's some interaction between the base image (Fedora?) and how nodes are configured in IBM Cloud. If you've got ideas on what to look for, I can look.
@nalind @giuseppe Ideas?
The compute nodes in IBM Cloud are running RHEL 7 in VSIs. Maybe something about that.
the code generating that error was already changed in buildah (August 2020): https://github.com/containers/buildah/commit/3835460c3ba74a3e664229b519306d3a596d0b3c?branch=3835460c3ba74a3e664229b519306d3a596d0b3c&diff=unified#diff-f2e4566c6b7e38384283187aba6d7fd91e5ba8da2ffd0f849277bb76bff27fb3L1377-L1379
My first suggestion is to try with an updated Podman/Buildah
@giuseppe My attempt at recreating this issue is using the current quay.io/podman/stable:latest. The original error reported by @gbhushan86 was definitely older; perhaps that's why I see a different error.
podman.yaml:
apiVersion: v1
kind: Pod
metadata:
labels:
app: podman
name: podman
spec:
automountServiceAccountToken: false
containers:
- image: quay.io/containers/podman:latest
imagePullPolicy: "Always"
name: podman
command: ['sh', '-c', 'while true; do sleep 100000; done']
securityContext:
privileged: true
Test Dockerfile (the same he used):
FROM java:openjdk-8-jre-alpine
WORKDIR /
#COPY ./target/spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar ./app.jar
RUN ls -ltra
EXPOSE 8080
CMD java -jar app.jar
Recreate:
$ oc create -f podman.yaml
pod/podman created
$ oc exec -it podman -- bash
[root@podman /]# mkdir project
[root@podman /]# cd project
[root@podman project]# vi Dockerfile -- create file with content shown above
[root@podman project]# podman build -t sample .
STEP 1: FROM java:openjdk-8-jre-alpine
Completed short name "java" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob cd134db5e982 done
Copying blob 709515475419 done
Copying blob 38a1c0aaa6fd done
Copying config fdc893b19a done
Writing manifest to image destination
Storing signatures
STEP 2: WORKDIR /
Error: error building at STEP "WORKDIR /": error decoding response: EOF
podman info:
[root@podman project]# podman info
host:
arch: amd64
buildahVersion: 1.18.0
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.0.21-3.fc33.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab'
cpus: 4
distribution:
distribution: fedora
version: "33"
eventLogger: file
hostname: podman
idMappings:
gidmap: null
uidmap: null
kernel: 3.10.0-1160.11.1.el7.x86_64
linkmode: dynamic
memFree: 3256815616
memTotal: 16651128832
ociRuntime:
name: crun
package: crun-0.16-3.fc33.x86_64
path: /usr/bin/crun
version: |-
crun version 0.16
commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
path: /run/podman/podman.sock
rootless: false
slirp4netns:
executable: ""
package: ""
version: ""
swapFree: 0
swapTotal: 0
uptime: 65h 13m 8.6s (Approximately 2.71 days)
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- registry.centos.org
- docker.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.imagestore: /var/lib/shared
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.3.0-1.fc33.x86_64
Version: |-
fusermount3 version: 3.9.3
fuse-overlayfs: version 1.3
FUSE library version 3.9.3
using FUSE kernel interface version 7.31
overlay.mountopt: nodev,fsync=0
graphRoot: /var/lib/containers/storage
graphStatus:
Backing Filesystem: overlayfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 1
runRoot: /var/run/containers/storage
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 2.1.0
Built: 1607438270
BuiltTime: Tue Dec 8 14:37:50 2020
GitCommit: ""
GoVersion: go1.15.5
OsArch: linux/amd64
Version: 2.2.1
When I ran that with log-level=debug I saw:
DEBU[0000] error changing to intended-new-root directory "/var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged": error changing to intended-new-root directory "/var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged": chdir /var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged: invalid argument
DEBU[0000] error building at step {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin LANG=C.UTF-8 JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre JAVA_VERSION=8u111 JAVA_ALPINE_VERSION=8.111.14-r0] Command:workdir Args:[/] Flags:[] Attrs:map[] Message:WORKDIR / Original:WORKDIR /}: error decoding response: EOF
which suggests the merged directory was not created or no longer exists.
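A couple of quick checks that could narrow down whether the fuse-overlayfs mount is actually present when buildah tries to chdir into it (a sketch; the layer ID is the one from the debug output above):
ls -ld /var/lib/containers/storage/overlay/3a64f3d763dde26e68e0d0354b2da5febe6011e5350b2fdb010d5bfd1258b55d/merged
mount | grep fuse-overlayfs
stat -f /var/lib/containers/storage/overlay   # shows the filesystem backing the storage directory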
Tried some experiments with buildah (quay.io/buildah/stable:latest). These were based on what I think the tektoncd buildah task creates - and which works.
The following tests use this pod spec:
apiVersion: v1
kind: Pod
metadata:
labels:
app: buildah
name: buildah
spec:
automountServiceAccountToken: false
containers:
- image: quay.io/buildah/stable:latest
imagePullPolicy: "Always"
name: buildah
command: ['sh', '-c', 'while true; do sleep 100000; done']
securityContext:
privileged: true
oc create -f buildah.yaml
oc exec -it buildah -- sh
buildah bud -t sample .
Output:
sh-5.0# buildah bud -t sample .
STEP 1: FROM java:openjdk-8-jre-alpine
Completed short name "java" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob cd134db5e982 done
Copying blob 709515475419 done
Copying blob 38a1c0aaa6fd done
Copying config fdc893b19a done
Writing manifest to image destination
Storing signatures
STEP 2: WORKDIR /
error building at STEP "WORKDIR /": error decoding response: EOF
Debug output showed the same error with respect to the /merged directory:
STEP 2: WORKDIR /
DEBU error changing to intended-new-root directory "/var/lib/containers/storage/overlay/b086b8ce5b041f24f402dba02514d90bd3072f09ff391edd181810b63e228ff8/merged": error changing to intended-new-root directory "/var/lib/containers/storage/overlay/b086b8ce5b041f24f402dba02514d90bd3072f09ff391edd181810b63e228ff8/merged": chdir /var/lib/containers/storage/overlay/b086b8ce5b041f24f402dba02514d90bd3072f09ff391edd181810b63e228ff8/merged: invalid argument
oc create -f buildah.yaml
oc exec -it buildah -- sh
buildah --storage-driver=overlay bud -t sample .
Output:
sh-5.0# buildah --storage-driver=overlay bud -t sample .
'overlay' is not supported over overlayfs, a mount_program is required: backing file system is unsupported for this graph driver
WARN failed to shutdown storage: "'overlay' is not supported over overlayfs, a mount_program is required: backing file system is unsupported for this graph driver"
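That message is about the filesystem backing /var/lib/containers/storage; without a volume mount it sits on the container's own overlayfs, which the overlay driver refuses to stack on without a mount_program. A quick way to see the backing filesystem (a sketch):
findmnt -T /var/lib/containers/storage
df -T /var/lib/containers/storage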
Getting closer to the tekton-cd task, I added an emptyDir volume mount for /var/lib/containers:
apiVersion: v1
kind: Pod
metadata:
labels:
app: buildah
name: buildah
spec:
automountServiceAccountToken: false
containers:
- image: quay.io/buildah/stable:latest
imagePullPolicy: "Always"
name: buildah
command: ['sh', '-c', 'while true; do sleep 100000; done']
securityContext:
privileged: true
volumeMounts:
- name: varlibcontainers
mountPath: /var/lib/containers
volumes:
- name: varlibcontainers
emptyDir: {}
oc create -f buildah.yaml
oc exec -it buildah -- sh
buildah bud -t sample .
Output:
sh-5.0# buildah bud -t sample .
STEP 1: FROM java:openjdk-8-jre-alpine
Completed short name "java" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob 709515475419 done
Copying blob 38a1c0aaa6fd done
Copying blob cd134db5e982 done
Copying config fdc893b19a done
Writing manifest to image destination
Storing signatures
STEP 2: WORKDIR /
error building at STEP "WORKDIR /": error decoding response: EOF
Debug output looks like Test 1
oc create -f buildah.yaml
oc exec -it buildah -- sh
buildah --storage-driver=overlay bud -t sample .
Output:
STEP 1: FROM java:openjdk-8-jre-alpine
STEP 2: WORKDIR /
STEP 3: RUN ls -ltra
total 76
drwxrwxrwt 2 root root 4096 Mar 3 2017 tmp
...
STEP 4: EXPOSE 8080
STEP 5: CMD java -jar app.jar
STEP 6: COMMIT sample
Getting image source signatures
Copying blob 9f8566ee5135 skipped: already exists
Copying blob 78075328e0da skipped: already exists
Copying blob 20dd87a4c2ab skipped: already exists
Copying blob 8196a9d4acbb done
Copying config 41ec2dcbb0 done
Writing manifest to image destination
Storing signatures
--> 41ec2dcbb0d
41ec2dcbb0dbbfeab220737e623c3069e6f1f9128f5cded04adc087017ab8c8d
Success with buildah - to build in a root container you must have a volume mounted and you must specify the storage driver.
Back to podman...
I was able to make podman work!
Podman.yaml:
apiVersion: v1
kind: Pod
metadata:
labels:
app: podman
name: podman
spec:
automountServiceAccountToken: false
containers:
- image: quay.io/podman/stable:latest
imagePullPolicy: "Always"
name: podman
command: ['sh', '-c', 'while true; do sleep 100000; done']
securityContext:
privileged: true
volumeMounts:
- mountPath: "/var/lib/containers/storage"
name: "varlibcontainers"
volumes:
- name: varlibcontainers
emptyDir: {}
oc create -f podman.yaml
oc exec -it podman -- sh
podman --storage-driver=overlay build -t sample .
Observations and questions:
The mount path had to be at least as deep as /var/lib/containers/storage; /var/lib/containers/ failed with an error related to some bolt db. To share this with other build steps, is /var/lib/containers/storage correct?
Is there any need/reason to mount a volume for /var/run/containers/storage? I saw that @gbhushan86 had been doing that. I don't see that in pipeline tasks using buildah.
Why is --storage-driver=overlay needed? It seems to me it should have picked that up from storage.conf:
sh-5.0# cat /etc/containers/storage.conf | grep -v -E "^#|^$"
[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
"/var/lib/shared",
]
[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
mountopt = "nodev,fsync=0"
[storage.options.thinpool]
This did not work either:
STORAGE_DRIVER=overlay podman build -t sample .
Is there a bug in how the config or env var overrides are passed to buildah from podman?
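A hypothetical quick check of which driver podman actually resolves from the config versus the environment (assuming the Go-template path matches this podman version):
podman info --format '{{.Store.GraphDriverName}}'
STORAGE_DRIVER=overlay podman info --format '{{.Store.GraphDriverName}}'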
Do the storage options - mount_program and mountopt - need to be respecified? Are they needed at all? The man page for podman says:
--storage-driver=value
Storage driver. The default storage driver for UID 0 is configured in /etc/containers/storage.conf ($HOME/.config/contain‐
ers/storage.conf in rootless mode), and is vfs for non-root users when fuse-overlayfs is not available. The STORAGE_DRIVER
environment variable overrides the default. The --storage-driver specified driver overrides all.
Overriding this option will cause the storage-opt settings in /etc/containers/storage.conf to be ignored. The user must
specify additional options via the --storage-opt flag.
My quick attempts at trying to specify --storage-opt broke this, so I hope they are not needed.
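For reference, the form I was experimenting with looked roughly like this (a sketch of the global flags, mirroring the graphOptions in the podman info output above; I have not confirmed that this combination behaves well here):
podman --storage-driver=overlay \
  --storage-opt overlay.mount_program=/usr/bin/fuse-overlayfs \
  --storage-opt overlay.mountopt=nodev,fsync=0 \
  build -t sample .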
Finally... @gbhushan86 had this working on AWS/Azure without my changes (no --storage-driver overlay). Does that point to a difference between AWS/Azure and IBM Cloud we need to sort out?
At any rate, it seems the podman image can work running in a privileged pod with a volume mount and adding --storage-driver=overlay to the podman build command.
--storage-driver=overlay should not be required if the driver is configured properly within the buildah/podman image storage.conf.
I was using /etc/containers/storage.conf unmodified from the current quay.io/podman/stable:latest image. I'm repeating that here with comments and blank lines stripped out:
sh-5.0# cat /etc/containers/storage.conf | grep -v -E "^#|^$"
[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
"/var/lib/shared",
]
[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
mountopt = "nodev,fsync=0"
[storage.options.thinpool]
I've built an alpine-based podman pod with the same config and the same versions of podman, and that works.
It seems to me the two variables are the Fedora-based image and possibly running on IBM Cloud (I have nothing else to try this on).
I think I got it working, but I'm not sure it's all correct. I don't understand why it works.
I removed the [storage.options.overlay] stanza from /etc/containers/storage.conf.
[root@podman project]# cat /etc/containers/storage.conf
[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
]
Run podman build -t sample .
[root@podman project]# podman build -t sample .
STEP 1: FROM java:openjdk-8-jre-alpine
Completed short name "java" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob cd134db5e982 done
Copying blob 38a1c0aaa6fd done
Copying blob 709515475419 done
Copying config fdc893b19a done
Writing manifest to image destination
Storing signatures
STEP 2: WORKDIR /
--> ea3b7030c6d
blah blah
STEP 6: COMMIT sample
--> e50bbd6910f
e50bbd6910f8ce79f5f7a133b80e9698059860e0d5e8614886bb10cd7dfc2a0c
Ran with debug logging. I think it shows it is using overlay (full log attached: podman_build.log).
$ grep overlay podman_build.log
DEBU[0000] Using graph driver overlay
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/G4VF3XD2NNWINWALJ64GGSQCMU:/var/lib/containers/storage/overlay/l/EKACG7TSYWVHCBCSFZ7HXJK4LZ:/var/lib/containers/storage/overlay/l/FDNVYVEFMYLDE7XJ2EFC7MTBE2,upperdir=/var/lib/containers/storage/overlay/83be2376c22e98cff202e3a8fc4e0f198fed3c878098c0a14ba346e79152e118/diff,workdir=/var/lib/containers/storage/overlay/83be2376c22e98cff202e3a8fc4e0f198fed3c878098c0a14ba346e79152e118/work
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/85e7e39f64598d967dffcfb96137a7c3eb329a9e80bfcdd1bf1e62d4eff17910-tmp:latest"
DEBU[0000] committing image with reference "containers-storage:[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/85e7e39f64598d967dffcfb96137a7c3eb329a9e80bfcdd1bf1e62d4eff17910-tmp:latest" is allowed by policy
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@fdc893b19a147681ee764b2edab6c494d60fe99d83b14b8794bbcbc040ec7aa7"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@03449ba70d0862b33d1b2c443d49a088cd387d3f0bfb9da71c559ea155c85c63"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@03449ba70d0862b33d1b2c443d49a088cd387d3f0bfb9da71c559ea155c85c63"
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/G4VF3XD2NNWINWALJ64GGSQCMU:/var/lib/containers/storage/overlay/l/EKACG7TSYWVHCBCSFZ7HXJK4LZ:/var/lib/containers/storage/overlay/l/FDNVYVEFMYLDE7XJ2EFC7MTBE2,upperdir=/var/lib/containers/storage/overlay/7c4fa9b7cf99b8c42de8354d9f70fa7b04ab560b4258e3d10695350f3e14f13e/diff,workdir=/var/lib/containers/storage/overlay/7c4fa9b7cf99b8c42de8354d9f70fa7b04ab560b4258e3d10695350f3e14f13e/work
DEBU[0000] bind mounted "/var/lib/containers/storage/overlay/7c4fa9b7cf99b8c42de8354d9f70fa7b04ab560b4258e3d10695350f3e14f13e/merged" to "/var/tmp/buildah767148077/mnt/rootfs"
DEBU[0000] bind mounted "/var/lib/containers/storage/overlay-containers/2b1bf7b2c03cbe19199be1ec1564a26e31f286c1a98eba2318098d3a2cf0adb1/userdata/run/secrets" to "/var/tmp/buildah767148077/mnt/buildah-bind-target-0"
Obviously I am not specifying mountopts. Does that matter?
And the bigger questions: Why does it work? Is the solution cloud platform dependent (though, again, alpine behaved the way I expected)?
mountopts should not matter; removing them is fine. Some of the mount options require kernels of a certain level, for example metacopy=on.
This will use native overlay, which if it works is great. Sometimes we have issues with it, and we use fuse-overlayfs for rootless containers.
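A minimal smoke test for native overlay inside the privileged pod (a sketch; run as root against the emptyDir-backed storage path so the upper dir is not itself on overlayfs):
d=/var/lib/containers/storage/ovl-test
mkdir -p $d/lower $d/upper $d/work $d/merged
mount -t overlay overlay -o lowerdir=$d/lower,upperdir=$d/upper,workdir=$d/work $d/merged && echo "native overlay OK"
umount $d/merged && rm -rf $d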
@rhatdan For my own education and understanding... Can you briefly explain or point to something that might describe to what extent "native overlay" and fuse-overlayfs are supplied by the host operating system or the container image, or a bit of both? I understand that in general the host provides the kernel and the container provides user-space code. I'm trying to get my head around why the fedora-based podman image behaves differently than the alpine-based image I created (which would point to container differences) and why the fedora-based image apparently behaves differently on AWS than on IBM Cloud (which suggests host kernel differences). It seems like this particular exercise must be sensitive to both.
I didn't open the issue, but as a member of the IBM Cloud development community I have an interest in understanding if IBM Cloud is doing something "wrong" or this is an area where we and our customers should expect differences.
Then it's probably time for me to bow out and let the original author say whether their concerns have been addressed.
Thanks in advance - and for your efforts so far.
Well, first off, the images and the file systems are unrelated. Podman supports two types of "overlay" file systems. Native (kernel) overlay is only available for rootful Podman. The reason for this is that the kernel prevents non-root users from mounting a kernel overlay file system, even if the user is in a user namespace. (There have been some ongoing investigations into whether this restriction could be loosened in the future.) Currently rootless mounting is only allowed, while in a user namespace, for procfs, sysfs, tmpfs, fuse, and bind file systems. (I believe this is still correct.) @giuseppe of my team created fuse-overlayfs, mimicking what the kernel does in "overlay", to allow rootless mode to mount a FUSE overlay file system. The fuse-overlayfs file system has also grown some other cool features that sometimes make it interesting to use even in rootful mode.
I have no idea why the images behave differently. I don't believe this has anything to do with IBM Cloud. The cloud should not affect the node OS in any way.
The reason for this is that the kernel prevents non-root users from mounting a kernel overlay file system, even if the user is in a user namespace. (There have been some ongoing investigations into whether this restriction could be loosened in the future.)
I believe this is expected in 5.11.
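For context, comparing the node kernel against that threshold is straightforward (the nodes in this thread report 3.10.0-1160.x, i.e. RHEL 7):
uname -r
grep -w overlay /proc/filesystems   # lists overlay if the kernel filesystem is registered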
Hi JMCMEEK, the podman image that you got working, is it built on top of alpine or fedora?
This just means you are using the builtin defaults. There was something in your old storage.conf that was causing a failure.
@gbhushan86 My last experiments were using quay.io/podman/stable:latest. That should be the same as quay.io/containers/podman:latest. I found that they (the podman team) apparently build multiple images; it looked like they should all be the same with respect to this issue.
@rhatdan Regarding my old storage.conf... That was the storage.conf that comes with the image. What I removed was:
[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
mountopt = "nodev,fsync=0"
[storage.options.thinpool]
From what I recall of my reading, the fuse-overlayfs mount program is for rootless use, and it may or may not work in the context of a container. I thought the mount_program was only used for non-root users (i.e. ignored for root). Maybe I misunderstood that.
Maybe this is "user error". Beats me.
We only use fuse-overlayfs if it is specified in storage.conf OR if running rootless mode and fuse-overlayfs is installed. Once overlayfs is supported for rootless mode, we will default to overlayfs for rootless, and then only fall back to fuse-overlayfs if it does not work.
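A quick way to see which of the two is in effect for a given setup (a sketch; the field names match the podman info output earlier in this thread):
podman info | grep -E 'graphDriverName|mount_program|Native Overlay Diff'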
A friendly reminder that this issue had no activity for 30 days.
I don't believe there is any action item for this issue, so I am closing. Reopen if I am mistaken.
Error: error creating build container: error creating container: error creating read-write layer with ID "fe39354233f80b7a594f9cd1e7a2b873ea2ed8d0eec64f2b3be8ebbab5be06b1": Stat /var/lib/containers/storage/overlay2/72e830a4dff5f0d5225cdc0a320e85ab1ce06ea5673acfe8d83a7645cbd0e9cf: no such file or directory
I am facing this issue after changing the storage driver to overlay2 and also creating a symlink between overlay and overlay2.
(Edited to make the report a bit more md-readable, no text was changed.)
/kind bug
Note: I tried building the Docker image with the latest podman image too, on top of an IBM OpenShift container, and am still facing the same error, which I pasted below. I'm using the podman engine on OC 4.5 to build images inside a pod. Recently I'm facing this issue while building an image from a Dockerfile. I'm using the version of podman below to build images:
Description: Podman throws an error while trying to build an image from a Dockerfile; the error output is below.
Steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
[root@mypod-ecf5e418-2c38-46ff-8bf0-38a8d5a08d61-nn2rk-nhtfr /]# podman version
Version: 2.0.4
API Version: 1
Go Version: go1.14.6
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
Output of podman info --debug:
[root@mypod-ecf5e418-2c38-46ff-8bf0-38a8d5a08d61-nn2rk-nhtfr /]# podman info --debug
host:
arch: amd64
buildahVersion: 1.15.0
cgroupVersion: v1
conmon:
package: conmon-2.0.19-1.fc32.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.19, commit: 5dce9767526ed27f177a8fa3f281889ad509fea7'
cpus: 4
distribution:
distribution: fedora
version: "32"
eventLogger: file
hostname: mypod-ecf5e418-2c38-46ff-8bf0-38a8d5a08d61-nn2rk-nhtfr
idMappings:
gidmap: null
uidmap: null
kernel: 3.10.0-1160.6.1.el7.x86_64
linkmode: dynamic
memFree: 662888448
memTotal: 16651128832
ociRuntime:
name: crun
package: crun-0.14.1-1.fc32.x86_64
path: /usr/bin/crun
version: |-
crun version 0.14.1
commit: 598ea5e192ca12d4f6378217d3ab1415efeddefa
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
os: linux
remoteSocket:
path: /run/podman/podman.sock
rootless: false
slirp4netns:
executable: ""
package: ""
version: ""
swapFree: 0
swapTotal: 0
uptime: 104h 9m 39.35s (Approximately 4.33 days)
registries:
gitlab.ethan.svc.cluster.local:8223:
Blocked: false
Insecure: true
Location: gitlab.ethan.svc.cluster.local:8223
MirrorByDigestOnly: false
Mirrors: null
Prefix: gitlab.ethan.svc.cluster.local:8223
search:
Package info (e.g. output of rpm -q podman or apt list podman):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.): IBM Cloud