nomadme opened this issue 2 years ago
Hi @nomadme, thanks for creating the issue.
Are you running buildah inside a container? It looks like it's failing while performing unshare, but first I'd suggest updating buildah to the latest version.
Could you also share more details about your environment (e.g. whether it's a container) along with logs from buildah --log-level=debug <your command>?
I suspect missing privileges in your working environment, with your release showing a misleading error message, so please retry on the latest version and share the details of that attempt.
Thank you for getting back on this @flouthoc.
Yes, it is running inside a container built from: https://catalog.redhat.com/software/containers/ubi8/openjdk-11/5dd6a4b45a13461646f677f4?tag=1.10-10.1638383051
[root@2c088bb099bf tmp]# buildah --log-level=debug info
DEBU[0000] running [/usr/bin/buildah-in-a-user-namespace --log-level=debug info] with environment [_=/usr/bin/buildah BASH_FUNC_which%%=() { ( alias;
eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@"
} LESSOPEN=||/usr/bin/lesspipe.sh %s JBOSS_CONTAINER_JAVA_PROXY_MODULE=/opt/jboss/container/java/proxy LD_PRELOAD=libnss_wrapper.so CHROME_BIN=/bin/chrome JBOSS_CONTAINER_UTIL_LOGGING_MODULE=/opt/jboss/container/util/logging/ PATH=/home/jboss/.local/bin:/home/jboss/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/s2i JBOSS_CONTAINER_MAVEN_S2I_MODULE=/opt/jboss/container/maven/s2i MAVEN_VERSION=3.6 S2I_SOURCE_DEPLOYMENTS_FILTER=*.jar SHLVL=1 JAVA_VENDOR=openjdk JOLOKIA_VERSION=1.6.2 NSS_WRAPPER_GROUP=/etc/group JBOSS_CONTAINER_PROMETHEUS_MODULE=/opt/jboss/container/prometheus TERM=xterm AB_JOLOKIA_AUTH_OPENSHIFT=true JBOSS_IMAGE_VERSION=1.10 JAVA_DATA_DIR=/deployments/data JBOSS_CONTAINER_JAVA_JVM_MODULE=/opt/jboss/container/java/jvm HOME=/home/jboss PWD=/tmp JAVA_VERSION=11 AB_JOLOKIA_HTTPS=true AB_PROMETHEUS_JMX_EXPORTER_CONFIG=/opt/jboss/container/prometheus/etc/jmx-exporter-config.yaml NSS_WRAPPER_PASSWD=/home/jboss/passwd container=oci JBOSS_CONTAINER_JAVA_S2I_MODULE=/opt/jboss/container/java/s2i which_declare=declare -f JBOSS_CONTAINER_JOLOKIA_MODULE=/opt/jboss/container/jolokia JBOSS_CONTAINER_OPENJDK_JDK_MODULE=/opt/jboss/container/openjdk/jdk JBOSS_CONTAINER_S2I_CORE_MODULE=/opt/jboss/container/s2i/core/ JBOSS_CONTAINER_MAVEN_36_MODULE=/opt/jboss/container/maven/36/ JAVA_HOME=/usr/lib/jvm/java-11 HOSTNAME=2c088bb099bf AB_JOLOKIA_PASSWORD_RANDOM=true LANG=C.utf8 JBOSS_CONTAINER_JAVA_RUN_MODULE=/opt/jboss/container/java/run JBOSS_CONTAINER_MAVEN_DEFAULT_MODULE=/opt/jboss/container/maven/default/ JBOSS_IMAGE_NAME=ubi8/openjdk-11 TMPDIR=/var/tmp _CONTAINERS_USERNS_CONFIGURED=1 BUILDAH_ISOLATION=rootless], UID map [{ContainerID:0 HostID:0 Size:4294967295}], and GID map [{ContainerID:0 HostID:0 Size:4294967295}]
qemu: unknown option 'log-level=debug'
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
Here is my local docker info:
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.7.1)
compose: Docker Compose (Docker Inc., v2.2.1)
scan: Docker Scan (Docker Inc., v0.14.0)
Server:
Containers: 4
Running: 3
Paused: 0
Stopped: 1
Images: 7
Server Version: 20.10.11
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
seccomp
Profile: default
cgroupns
Kernel Version: 5.10.76-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: aarch64
CPUs: 7
Total Memory: 9.718GiB
Name: docker-desktop
ID: GHQF:7XPX:HPMX:UONT:A37R:2M7V:VCUZ:Z3MW:3GTK:T22G:SK2I:Q3RN
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
I'm running on an M1 Pro chip, if that clues in something as well.
I also ran a container from https://quay.io/repository/buildah/stable?tab=info, and it hits the same error as the container above.
Output of: cat /etc/*release
[root@b359094d70dc /]# cat /etc/*release
Fedora release 35 (Thirty Five)
NAME="Fedora Linux"
VERSION="35 (Container Image)"
ID=fedora
VERSION_ID=35
VERSION_CODENAME=""
PLATFORM_ID="platform:f35"
PRETTY_NAME="Fedora Linux 35 (Container Image)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:35"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f35/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=35
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=35
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Container Image"
VARIANT_ID=container
Fedora release 35 (Thirty Five)
Fedora release 35 (Thirty Five)
Output of: buildah --log-level=debug info
[root@b359094d70dc /]# buildah --log-level debug info
DEBU[0000] running [/usr/bin/buildah-in-a-user-namespace --log-level debug info] with environment [_=/usr/bin/buildah PATH=/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DEBUGINFOD_URLS=https://debuginfod.fedoraproject.org/ SHLVL=1 TERM=xterm BUILDAH_ISOLATION=chroot FGC=f35 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.m4a=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.oga=01;36:*.opus=01;36:*.spx=01;36:*.xspf=01;36: LANG=C.UTF-8 HOME=/root container=oci PWD=/ DISTTAG=f35container HOSTNAME=b359094d70dc TMPDIR=/var/tmp _CONTAINERS_USERNS_CONFIGURED=1], UID map 
[{ContainerID:0 HostID:0 Size:4294967295}], and GID map [{ContainerID:0 HostID:0 Size:4294967295}]
qemu: unknown option 'log-level'
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
Hello @flouthoc, I'm not sure whether I'm running into a similar issue.
$ docker run -it quay.io/buildah/stable:latest /bin/sh
sh-5.1# buildah --log-level debug info
DEBU[0000] running [buildah-in-a-user-namespace --log-level debug info] with environment [HOSTNAME=141826c3abd4 DISTTAG=f35container PWD=/ container=oci HOME=/root FGC=f35 BUILDAH_ISOLATION=chroot TERM=xterm SHLVL=1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin _=/usr/bin/buildah TMPDIR=/var/tmp _CONTAINERS_USERNS_CONFIGURED=1], UID map [{ContainerID:0 HostID:0 Size:4294967295}], and GID map [{ContainerID:0 HostID:0 Size:4294967295}]
Error during unshare(CLONE_NEWUSER): Operation not permitted
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
Does anything from that output indicate a known issue?
Try docker run --security-opt seccomp=unconfined -it quay.io/buildah/stable:latest /bin/sh
Or better yet, use Podman. :^)
Docker's default seccomp.json does not allow the unshare syscall.
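Rather than disabling seccomp entirely with seccomp=unconfined, a narrower option is to append an allow rule for unshare to a copy of Docker's default profile. This is just a sketch, not an official recipe: the stub default.json below stands in for the real profile (profiles/seccomp/default.json in the moby/moby repository), and jq is assumed to be available.

```shell
# Stand-in for Docker's default seccomp profile -- in practice, download
# profiles/seccomp/default.json from the moby/moby repository instead.
cat > default.json <<'EOF'
{"defaultAction": "SCMP_ACT_ERRNO", "syscalls": []}
EOF

# Append an allow rule for unshare, the syscall rootless buildah needs
# in order to create user namespaces (CLONE_NEWUSER).
jq '.syscalls += [{"names": ["unshare"], "action": "SCMP_ACT_ALLOW"}]' \
  default.json > buildah-seccomp.json

# The resulting profile can then be passed to docker instead of "unconfined":
#   docker run --security-opt seccomp=buildah-seccomp.json \
#     -it quay.io/buildah/stable:latest /bin/sh
cat buildah-seccomp.json
```

This keeps the rest of the default syscall filtering in place while unblocking only the call buildah actually needs.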
Thanks @rhatdan.
When I ran it as suggested, I still got the same error.
docker run --security-opt seccomp=unconfined --platform linux/amd64 -it quay.io/buildah/stable:latest /bin/sh
sh-5.1# buildah
qemu: no user program specified
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
sh-5.1# buildah info
Error while loading info: No such file or directory
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
here is the log:
sh-5.1# buildah --log-level debug info
DEBU[0000] running [/usr/bin/buildah-in-a-user-namespace --log-level debug info] with environment [_=/usr/bin/buildah PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin SHLVL=1 TERM=xterm BUILDAH_ISOLATION=chroot FGC=f35 HOME=/root container=oci PWD=/ DISTTAG=f35container HOSTNAME=7bc9798701e5 TMPDIR=/var/tmp _CONTAINERS_USERNS_CONFIGURED=1], UID map [{ContainerID:0 HostID:0 Size:4294967295}], and GID map [{ContainerID:0 HostID:0 Size:4294967295}]
qemu: unknown option 'log-level'
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
sh-5.1#
Hi @nomadme and @clydet,
Sorry for the late reply. I tried playing with docker and quay.io/buildah/stable:latest inside docker.
I think inside a container buildah uses fuse-overlayfs rather than the host kernel's overlay, so we also need to mount /dev/fuse when running buildah inside docker.
Could you please try the following command? It works for me:
sudo docker run -it --device /dev/fuse:rw --security-opt seccomp=unconfined --security-opt apparmor=unconfined quay.io/buildah/stable:latest /bin/sh
Output from my terminal
flouthoc@flouthoc-pc:~$ sudo docker run -it --device /dev/fuse:rw --security-opt seccomp=unconfined --security-opt apparmor=unconfined quay.io/buildah/stable:latest /bin/sh
sh-5.1# buildah images
REPOSITORY TAG IMAGE ID CREATED SIZE
sh-5.1# vi Dockerfile
sh-5.1# buildah build -t test .
STEP 1/2: FROM alpine
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob 59bf1c3509f3 done
Copying config c059bfaa84 done
Writing manifest to image destination
Storing signatures
STEP 2/2: RUN echo hello
hello
COMMIT test
Getting image source signatures
Copying blob 8d3ac3489996 skipped: already exists
Copying blob 274a6abcd2da done
Copying config 00566efaba done
@flouthoc, You're a stud. That works for me locally. We're looking to use buildah in containers running in AWS ECS via Fargate. According to the AWS docs regarding devices:
Note: If you are using tasks that use the Fargate launch type, the devices parameter is not supported.
Any idea how we might work around an environment that doesn't allow us access to host devices?
Thanks @flouthoc.
For me, it still doesn't work; I'm on an M1 Pro chip.
sudo docker run -it --device /dev/fuse:rw --security-opt seccomp=unconfined --security-opt apparmor=unconfined quay.io/buildah/stable:latest /bin/sh
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
sh-5.1# buildah images
Error while loading images: No such file or directory
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
sh-5.1# vi Dockerfile
sh-5.1# buildah build -t test .
Error while loading build: No such file or directory
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
Here is another try, with the M1-specific --platform flag added:
sudo docker run -it --device /dev/fuse:rw --security-opt seccomp=unconfined --security-opt apparmor=unconfined --platform linux/amd64 quay.io/buildah/stable:latest /bin/sh
sh-5.1# buildah images
Error while loading images: No such file or directory
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
sh-5.1# vi
sh-5.1# vi Dockerfile
sh-5.1# buildah build -t test .
Error while loading build: No such file or directory
ERRO[0000] error parsing PID "": strconv.Atoi: parsing "": invalid syntax
ERRO[0000] (unable to determine exit status)
@flouthoc, I Googled around a bit and saw this, which states:
Podman can use native overlay file system with the Linux kernel versions 5.13. Up until now, we have been using fuse-overlayfs. The kernel gained rootless support in the 5.11 kernel, but a bug prevented SELinux use with the file system; this bug was fixed in 5.13.
It looks like Fedora is backporting the fix into its 5.12 kernels, so users should be able to use it once they get access to the kernel.
I haven't found a Fedora container with a kernel later than 5.10.47-linuxkit. Guessing for our use case we'll need to move away from Fargate, as the AWS docs seem to indicate support for host device exposure when favoring EC2-based services. 🤞
Do you think there's any use in searching for RHEL-derivative containers with a 5.13 kernel? If so, please point me in the right direction.
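Going by the 5.13 threshold from the quoted article, here is a quick sketch for checking whether a kernel version string is new enough for native rootless overlay; the version parsing is deliberately simple and illustrative:

```shell
# Returns success (0) when the kernel version is >= 5.13, i.e. new enough
# for rootless native overlay with SELinux per the article quoted above;
# on older kernels buildah falls back to fuse-overlayfs (/dev/fuse) or vfs.
supports_native_overlay() {
  major=${1%%.*}        # e.g. "5" from "5.10.76-linuxkit"
  rest=${1#*.}
  minor=${rest%%.*}     # e.g. "10"
  [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 13 ]; }
}

if supports_native_overlay "$(uname -r)"; then
  echo "kernel can use native rootless overlay"
else
  echo "fall back to fuse-overlayfs or vfs"
fi
```

Note that the kernel seen inside a container is the host's (or, on Docker Desktop, the linuxkit VM's), so this check tells you about the runtime environment, not the container image.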
@clydet I think that would need a host kernel change. AFAIK Fargate locks you to a kernel, so I'm not sure you can change it; could you confirm with AWS support whether the host kernel can be changed on ECS with Fargate? I think not.
I'd recommend using ECS with EC2 instead; that should easily allow you to use --device or to change the kernel via a custom AMI, whatever you like.
@nomadme could you please try with --platform linux/arm64/v8 or --platform linux/arm? I think Docker Desktop is doing something with emulation, not sure.
You can probably use the "vfs" driver.
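If passing --storage-driver vfs on every invocation gets tedious, the driver can also be pinned in the container's storage.conf. A sketch (paths per containers-storage.conf(5); the graphroot/runroot values below are the usual rootful defaults and may need adjusting for your setup):

```toml
# /etc/containers/storage.conf (rootless: ~/.config/containers/storage.conf)
[storage]
driver = "vfs"
graphroot = "/var/lib/containers/storage"
runroot = "/run/containers/storage"
```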
A friendly reminder that this issue had no activity for 30 days.
I ran into the same issue when using AWS ECS with Fargate. It uses vfs as the default storage driver, and even with the explicit --storage-driver vfs flag I get the same result.
buildah --storage-driver vfs bud -t myimage:1.0 -f .
Error during unshare(CLONE_NEWUSER): Operation not permitted
time="2022-03-11T17:40:04Z" level=error msg="error parsing PID \"\": strconv.Atoi: parsing \"\": invalid syntax"
time="2022-03-11T17:40:04Z" level=error msg="(unable to determine exit status)"
The AWS Fargate version we use is 1.4.0, and it seems to use containerd as the container runtime. Using EC2 instead of Fargate is an option, but the AWS folks seem to have managed this with kaniko: https://aws.amazon.com/blogs/containers/building-container-images-on-amazon-ecs-on-aws-fargate/ and I was wondering if anyone has managed to get around it with buildah.
This looks like you are being blocked by either seccomp or a missing CAP_SETUID.
This works on my M1:
docker run -it -e _BUILDAH_STARTED_IN_USERNS="" -e BUILDAH_ISOLATION=chroot \
--security-opt seccomp=unconfined --security-opt label:disabled \
quay.io/buildah/stable:latest /bin/sh
However, it's necessary to use vfs as the storage driver, e.g.: buildah bud --storage-driver vfs .
For overlay, it's necessary to mount the device, e.g.:
docker run -it --device /dev/fuse:rw -e _BUILDAH_STARTED_IN_USERNS="" -e BUILDAH_ISOLATION=chroot \
--security-opt seccomp=unconfined --security-opt label:disabled \
quay.io/buildah/stable:latest /bin/sh
Interesting how everyone is using AppArmor options on a RHEL-based Docker image instead of label:disabled for SELinux.
Description
Steps to reproduce the issue: any buildah command fails with "No such file or directory", no matter which command I use:
buildah images
buildah bud -f Dockerfile -t foo:latest .
Describe the results you received:
buildah images
buildah bud -f Dockerfile -t foo:latest .
Describe the results you expected: image is built.
Output of rpm -q buildah or apt list buildah:
Output of buildah version:
Output of cat /etc/*release:
Output of uname -a:
Output of cat /etc/containers/storage.conf: