containers / container-selinux

SELinux policy files for Container Runtimes
GNU General Public License v2.0

staff_u rootless podman #92

Closed tt-why closed 3 years ago

tt-why commented 4 years ago

Hello again, I'm stuck trying to run rootless podman as staff_u...

First, here is my configuration:

uname -a
Linux desktop 5.5.11-200.fc31.x86_64 #1 SMP Mon Mar 23 17:32:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

dnf list installed | grep -E 'selinux-policy|container-selinux|podman'
container-selinux.noarch                    2:2.124.0-3.fc31                   @updates               
podman.x86_64                               2:1.8.2-2.fc31                     @updates               
podman-plugins.x86_64                       2:1.8.2-2.fc31                     @updates               
selinux-policy.noarch                       3.14.4-49.fc31                     @updates               
selinux-policy-devel.noarch                 3.14.4-49.fc31                     @updates               
selinux-policy-targeted.noarch              3.14.4-49.fc31                     @updates               

getenforce 
Enforcing

podman version
Version:            1.8.2
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

My first issue

I first tried a simple podman version as a non-root user confined as the staff_u SELinux user. No luck: the command hangs. Suspecting an SELinux problem, I validated it with setenforce 0 && podman version && setenforce 1, and it worked.

Since I'm running as a confined user, I also tried semanage permissive -a staff_t && podman version && semanage permissive -d staff_t, and again it worked.

Great, so it might be a problem in staff.te of the fedora-selinux/selinux-policy GitHub project...

I enabled "debug" mode with semodule -DB and found these AVCs:

----
type=AVC msg=audit(03/29/2020 21:04:53.601:1049) : avc:  denied  { noatsecure } for  pid=202831 comm=bash scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:container_runtime_t:s0 tclass=process permissive=0 
----
type=AVC msg=audit(03/29/2020 21:04:53.601:1050) : avc:  denied  { rlimitinh } for  pid=202831 comm=podman scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:container_runtime_t:s0 tclass=process permissive=0 
----
type=AVC msg=audit(03/29/2020 21:04:53.601:1051) : avc:  denied  { siginh } for  pid=202831 comm=podman scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:container_runtime_t:s0 tclass=process permissive=0 
----
type=AVC msg=audit(03/29/2020 21:04:53.649:1052) : avc:  denied  { sys_ptrace } for  pid=1192 comm=systemd capability=sys_ptrace  scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:staff_t:s0 tclass=cap_userns permissive=0
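For reference, one way to turn denials like these into a local module is audit2allow; a sketch, assuming auditd is logging the AVCs and the policycoreutils python tools are installed:

```shell
# Collect recent AVC denials and generate a local module "mypodman";
# review the generated mypodman.te first, since audit2allow output is
# often broader than strictly needed.
sudo ausearch -m avc -ts recent | audit2allow -M mypodman
sudo semodule -i mypodman.pp
```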

So I tried a quick patch to allow the denied rules:

cat mypodman.te 
policy_module(mypodman, 1.0)
require {
    type container_runtime_t;
    type staff_t;
    class process { noatsecure rlimitinh siginh };
    class cap_userns sys_ptrace;
}
#============= staff_t ==============
allow staff_t container_runtime_t:process { noatsecure rlimitinh siginh };
allow staff_t self:cap_userns sys_ptrace;

semodule -i mypodman.pp && podman version: it works, awesome!
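A note for readers: with selinux-policy-devel installed (as in the package list above), the .pp can be rebuilt from the .te with the development Makefile; a minimal sketch:

```shell
# Build mypodman.pp from mypodman.te using the refpolicy devel Makefile,
# then load it. Run from the directory containing mypodman.te.
make -f /usr/share/selinux/devel/Makefile mypodman.pp
sudo semodule -i mypodman.pp
```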

Next issue with a podman container ls

Again, the command podman container ls hangs :( With semanage permissive -a staff_t && podman container ls && semanage permissive -d staff_t it works again... But this time nothing relevant shows up in ausearch -i -m avc -m user_avc.

Last issue with a podman run -it fedora:31 bash

The following error occurs : {"msg":"exec container process /usr/bin/bash: Permission denied","level":"error","time":"2020-03-30T09:07:45.000757281Z"}

Here are the relevant findings:

find /home/user/.local/share/containers/ -name bash -type f
/home/user/.local/share/containers/storage/overlay/xxx/diff/usr/bin/bash

ll -Z /home/user/.local/share/containers/storage/overlay/xxx/diff/usr/bin/bash
-rwxr-xr-x. 1 user user staff_u:object_r:data_home_t:s0 1203992 Dec  6 13:08 /home/user/.local/share/containers/storage/overlay/xxx/diff/usr/bin/bash

mount | grep /home
/home type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
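As a diagnostic sketch (not a fix), the expected vs. actual labels for the rootless storage tree can be compared and the tree relabeled to the policy defaults:

```shell
# What label does the loaded policy expect for this path?
matchpathcon ~/.local/share/containers/storage
# Relabel the whole rootless storage tree to the policy defaults.
restorecon -R -v ~/.local/share/containers
```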

type=AVC msg=audit(03/30/2020 11:07:45.756:685) : avc:  denied  { transition } for  pid=190930 comm=3 path=/usr/bin/bash dev="fuse" ino=67181813 scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c433,c907 tclass=process permissive=0

sesearch -A -s container_runtime_t -t container_t -p transition
allow container_runtime_domain container_domain:process { dyntransition transition };

I can't figure out why the rule does not apply here even though it is present...

It works again when I do semanage permissive -a container_runtime_t, but that's not the perfect solution.

Thank you again for your time :)

rhatdan commented 4 years ago

I would run in permissive mode, gather all of the AVCs, and then attach them. I will add the rules you have found.

rhatdan commented 4 years ago

@wrabcak PTAL. A lot of these should go into selinux-policy.

I would say that we should allow staff_t and user_t access to all user namespace capabilities.

$ sesearch -A -s staff_t -c cap_userns
allow staff_t staff_t:cap_userns { setpcap sys_admin sys_chroot };

Currently they are very limited.

rhatdan commented 4 years ago

One issue is that your staff_t is logging in without being fully ranged.

Your process should be staff_u:staff_r:staff_t:s0-s0:c0.c1024

But I believe it is only

staff_u:staff_r:staff_t:s0

I think that is why you are getting the transition error. We need container_runtime_t to be fully ranged.
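A quick way to verify whether a login is fully ranged (a sketch; semanage needs root):

```shell
# The "MLS/MCS Range" column for the mapping should read
# s0-s0:c0.c1023 for a fully ranged staff_u login.
sudo semanage login -l
# And the range of the current session:
id -Z
```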

rhatdan commented 4 years ago

This rule should not be necessary, I believe:
allow staff_t container_runtime_t:process { noatsecure rlimitinh siginh };

tt-why commented 4 years ago

One issue is that your staff_t is logging in without being fully ranged.

Your process should be staff_u:staff_r:staff_t:s0-s0:c0.c1024

But I believe it is only

staff_u:staff_r:staff_t:s0

I think that is why you are getting the transition error. We need container_runtime_t to be fully ranged.

You were right! I forgot to specify the complete range when confining my user in my kickstart...
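In case it helps others, an existing mapping can also be fixed after install with semanage instead of kickstart; a sketch, where user is a hypothetical account name:

```shell
# Re-map the account to staff_u with the full MCS range;
# effective at the next login.
sudo semanage login -m -s staff_u -r 's0-s0:c0.c1023' user
```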

So now I have this setup:

ps fauxZ|grep podman
staff_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 root 46406 0.0  0.0 6132 908 pts/0 S+ 16:14   0:00              \_ grep --color=auto podman
staff_u:staff_r:container_runtime_t:s0-s0:c0.c1023 user 44947 0.0  0.1 63344 33472 ? S 16:13   0:00 podman

id -Z
staff_u:staff_r:staff_t:s0-s0:c0.c1023

I rebooted to apply the range, ran restorecon on my home folder, and did a podman system reset. But the following simple command still fails: podman run -it alpine:latest sh

Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob aad63a933944 done  
Copying config a187dde48c done  
Writing manifest to image destination
Storing signatures
{"msg":"exec container process `/bin/sh`: Permission denied","level":"error","time":"2020-03-30T14:13:45.000319123Z"}

avc:  denied  { transition } for  pid=45145 comm=3 path=/bin/busybox dev="fuse" ino=33557878 scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c313,c821 tclass=process permissive=0

Any idea ?

wrabcak commented 4 years ago

@wrabcak PTAL. A lot of these should go into selinux-policy.

I would say that we should allow staff_t and user_t access to all user namespace capabilities.

$ sesearch -A -s staff_t -c cap_userns
allow staff_t staff_t:cap_userns { setpcap sys_admin sys_chroot };

Currently they are very limited.

We can allow it for all except sys_ptrace. For that boolean we have specific deny_ptrace boolean.

FYI @zpytela
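For context, the boolean mentioned above can be inspected and toggled like any other; a sketch:

```shell
# deny_ptrace gates ptrace-related permissions policy-wide;
# -P makes the change persistent across reboots.
getsebool deny_ptrace
sudo setsebool -P deny_ptrace off
```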

rhatdan commented 4 years ago

So the problem is that staff_t:s0-s0:c0.c1023 is transitioning to container_runtime_t:s0 when you execute podman; I forget why the range is dropped...

cat > p.sh << _EOF
#!/bin/sh
id -Z
_EOF
chmod +x p.sh
chcon -t container_runtime_exec_t p.sh
./p.sh
tt-why commented 4 years ago

So the problem is that staff_t:s0-s0:c0.c1023 is transitioning to container_runtime_t:s0 when you execute podman; I forget why the range is dropped...

cat > p.sh << _EOF
#!/bin/sh
id -Z
_EOF
chmod +x p.sh
chcon -t container_runtime_exec_t p.sh
./p.sh

I tried your script, but it didn't transition to another domain unless I did semanage permissive -a staff_t...

Here is the AVC (with staff_t permissive):

type=AVC msg=audit(03/31/2020 11:00:03.786:426) : avc:  denied  { nosuid_transition } for  pid=89233 comm=bash scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=staff_u:staff_r:container_runtime_t:s0-s0:c0.c1023 tclass=process2 permissive=1

But I see that in the normal podman use case you have authorized that: https://github.com/containers/container-selinux/blob/f00d1f4ec867be2aeb51b3b32c12a5a9a8015201/container.te#L42

Here is the ps fauxZ of a successful podman run -it alpine:latest sh:

staff_u:staff_r:staff_t:s0-s0:c0.c1023 user 22915 0.0  0.0   7132  4072 pts/1    Ss   10:27   0:00  \_ /bin/bash
staff_u:staff_r:container_runtime_t:s0-s0:c0.c1023 user 53594 0.6  0.2 1051740 58568 pts/1 Sl+ 10:42   0:00      \_ podman run -it alpine:latest sh
staff_u:staff_r:container_runtime_t:s0-s0:c0.c1023 user 53613 0.0  0.0 4372 2716 pts/1 S 10:42   0:00          \_ /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-7041947e-a275-010d-8737-069223a5f763 tap0
staff_u:staff_r:container_runtime_t:s0-s0:c0.c1023 user 43784 0.0  0.1 63344 33404 ? S 10:38   0:00 podman
staff_u:staff_r:container_runtime_t:s0-s0:c0.c1023 user 53612 0.0  0.0 5040 2052 ? Ss 10:42   0:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/user/.local/share/containers/storage/overlay/l/HV2BUSAOM5FKWFVRWZVWPAIRDC,upperdir=/home/user/.local/share/containers/storage/overlay/1018760c5eb4ef9c1f8c9701ce5b3f2cb6395189e173
staff_u:staff_r:container_runtime_t:s0 user 53616 0.0  0.0  80404  2156 ?        Ssl  10:42   0:00 /usr/bin/conmon --api-version 1 -s -c cd7774b3904e8666a922ccf5a8d27a46e5256b92a150ed81d52b8b369f1f8b57 -u cd7774b3904e8666a922ccf5a8d27a46e5256b92a150ed81d52b8b369f1f8b57 -r /usr/bin/crun -b /home/user/.local/share/conta
system_u:system_r:container_t:s0:c245,c675 user 53620 0.0  0.0 1644 944 pts/0    Ss+  10:42   0:00  \_ sh

I might be wrong, but if it is a range-drop problem, it seems (thanks to the f option of ps) that conmon is the parent of the sh command inside my container. And as we can see, it doesn't have any range applied to it :/

rhatdan commented 4 years ago

If you add the nosuid_transition to the container runtime, does it work?

rhatdan commented 4 years ago

Does your user have NoNewPrivs turned on?

grep No /proc/self/status
NoNewPrivs: 0
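The flag is per-process and inherited across exec, so it's worth checking in the exact shell that launches podman; a quick sketch:

```shell
# Print the no_new_privs flag for the current shell. A value of 1
# blocks privilege-gaining SELinux domain transitions on exec.
grep NoNewPrivs /proc/self/status
```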

tt-why commented 4 years ago

If you add the nosuid_transition to the container runtime, does it work?

The following AVC appeared when I was running your p.sh; I don't think it's the problem, because it doesn't occur when using podman.

Does your user have NoNewPrivs turned on?

grep No /proc/self/status
NoNewPrivs: 0

Here is mine :

grep No /proc/self/status
NoNewPrivs: 0

The AVC when running podman run -it alpine:latest sh is still here; it is not in the staff_t domain, since staff_t is currently in permissive mode.

type=AVC msg=audit(03/31/2020 17:09:11.728:878) : avc:  denied  { transition } for  pid=460003 comm=3 path=/bin/busybox dev="fuse" ino=33557878 scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c461,c919 tclass=process permissive=0
smijolovic commented 4 years ago

Same issue here with buildah. For compliance purposes, our development environment does not allow unconfined SELinux users. We use the staff seuser, and it is not working.

Our user:
uid=2100(rpmbuild) gid=2100(rpmbuild) groups=2100(rpmbuild),510(logs) context=staff_u:staff_r:staff_t:s0-s0:c0.c1023

Home mount in fstab: /dev/mapper/chaasm-home /home xfs defaults 0 0

Home directory for rpmbuild: drwx------. 19 rpmbuild rpmbuild staff_u:object_r:user_home_dir_t:s0 4096 Apr 20 21:31 rpmbuild

more /etc/subuid
rpmbuild:100000:65536
more /etc/subgid
rpmbuild:100000:65536

When trying to build a flannel image in buildah using either the dockerfile or converting it to a buildah script, it fails when SELinux is in enforced mode.

Both are able to pull the alpine image (after building an SELinux policy) but cannot run it, and then the build fails.

With buildah bud...
BUILDAH_ISOLATION=rootless buildah bud -t 0.12.0-amd64 -f $BUILD_FLANNEL/src/github.com/coreos/flannel/Dockerfile.amd64 --log-level=debug .

DEBU running [buildah-in-a-user-namespace bud -t v0.12.0-amd64 -f /home/rpmbuild/nimbus8/flannel-build/src/github.com/coreos/flannel/Dockerfile.amd64 --log-level=debug .] with environment [BUILDAH_ISOLATION=rootless LS_COLORS=rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=01;05;37;41:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;40:.tar=38;5;9:.tgz=38;5;9:.arc=38;5;9:.arj=38;5;9:.taz=38;5;9:.lha=38;5;9:.lz4=38;5;9:.lzh=38;5;9:.lzma=38;5;9:.tlz=38;5;9:.txz=38;5;9:.tzo=38;5;9:.t7z=38;5;9:.zip=38;5;9:.z=38;5;9:.dz=38;5;9:.gz=38;5;9:.lrz=38;5;9:.lz=38;5;9:.lzo=38;5;9:.xz=38;5;9:.zst=38;5;9:.tzst=38;5;9:.bz2=38;5;9:.bz=38;5;9:.tbz=38;5;9:.tbz2=38;5;9:.tz=38;5;9:.deb=38;5;9:.rpm=38;5;9:.jar=38;5;9:.war=38;5;9:.ear=38;5;9:.sar=38;5;9:.rar=38;5;9:.alz=38;5;9:.ace=38;5;9:.zoo=38;5;9:.cpio=38;5;9:.7z=38;5;9:.rz=38;5;9:.cab=38;5;9:.wim=38;5;9:.swm=38;5;9:.dwm=38;5;9:.esd=38;5;9:.jpg=38;5;13:.jpeg=38;5;13:.mjpg=38;5;13:.mjpeg=38;5;13:.gif=38;5;13:.bmp=38;5;13:.pbm=38;5;13:.pgm=38;5;13:.ppm=38;5;13:.tga=38;5;13:.xbm=38;5;13:.xpm=38;5;13:.tif=38;5;13:.tiff=38;5;13:.png=38;5;13:.svg=38;5;13:.svgz=38;5;13:.mng=38;5;13:.pcx=38;5;13:.mov=38;5;13:.mpg=38;5;13:.mpeg=38;5;13:.m2v=38;5;13:.mkv=38;5;13:.webm=38;5;13:.ogm=38;5;13:.mp4=38;5;13:.m4v=38;5;13:.mp4v=38;5;13:.vob=38;5;13:.qt=38;5;13:.nuv=38;5;13:.wmv=38;5;13:.asf=38;5;13:.rm=38;5;13:.rmvb=38;5;13:.flc=38;5;13:.avi=38;5;13:.fli=38;5;13:.flv=38;5;13:.gl=38;5;13:.dl=38;5;13:.xcf=38;5;13:.xwd=38;5;13:.yuv=38;5;13:.cgm=38;5;13:.emf=38;5;13:.ogv=38;5;13:.ogx=38;5;13:.aac=38;5;45:.au=38;5;45:.flac=38;5;45:.m4a=38;5;45:.mid=38;5;45:.midi=38;5;45:.mka=38;5;45:.mp3=38;5;45:.mpc=38;5;45:.ogg=38;5;45:.ra=38;5;45:.wav=38;5;45:.oga=38;5;45:.opus=38;5;45:.spx=38;5;45:.xspf=38;5;45: SSH_CONNECTION=172.16.37.1 52280 172.16.37.187 22 LANG=en_US.UTF-8 HISTCONTROL=ignoredups 
HISTTIMEFORMAT=[ %FT%T ] HOSTNAME=localhost.localdomain OLDPWD=/home/rpmbuild/nimbus8/flannel-build/src/github.com/coreos XDG_SESSION_ID=25 USER=rpmbuild GOPATH=/home/rpmbuild/nimbus8/flannel-build SELINUX_ROLE_REQUESTED= PWD=/home/rpmbuild/nimbus8/flannel-build/src/github.com/coreos/flannel HOME=/home/rpmbuild SSH_CLIENT=172.16.37.1 52280 22 SELINUX_LEVEL_REQUESTED= TMPDIR=/home/rpmbuild/nimbus8/flannel-build/tmp SSH_TTY=/dev/pts/0 MAIL=/var/spool/mail/rpmbuild SHELL=/bin/bash TERM=xterm-256color SELINUX_USE_CURRENT_RANGE= TMOUT=600 SHLVL=2 LOGNAME=rpmbuild DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/2100/bus XDG_RUNTIMEDIR=/run/user/2100 PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rpmbuild/.local/bin:/home/rpmbuild/bin:/usr/local/go14/bin:/usr/local/gotools/bin HISTSIZE=1000 LESSOPEN=||/usr/bin/lesspipe.sh %s =/usr/bin/buildah _CONTAINERS_USERNS_CONFIGURED=1], UID map [{ContainerID:0 HostID:2100 Size:1} {ContainerID:1 HostID:100000 Size:65536}], and GID map [{ContainerID:0 HostID:2100 Size:1} {ContainerID:1 HostID:100000 Size:65536}] WARN error running newuidmap: exit status 1: newuidmap: write to uid_map failed: Operation not permitted WARN falling back to single mapping
DEBU Pull Policy for pull [PullIfNewer]
DEBU umask value too restrictive. Forcing it to 022 DEBU [graphdriver] trying provided driver "overlay" DEBU overlay: mount_program=/usr/bin/fuse-overlayfs DEBU overlay: mount_program=/usr/bin/fuse-overlayfs DEBU backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false DEBU reading local Dockerfile "/home/rpmbuild/nimbus8/flannel-build/src/github.com/coreos/flannel/Dockerfile.amd64" DEBU base: "alpine"
DEBU FROM "alpine"
STEP 1: FROM alpine DEBU Loading registries configuration "/etc/containers/registries.conf" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:latest" DEBU reference "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:latest" does not resolve to an image ID DEBU registry "localhost" is not listed in registries configuration "/etc/containers/registries.conf", assuming it's not blocked DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:latest" DEBU copying "docker://localhost/alpine:latest" to "localhost/alpine:latest" DEBU Trying to access "localhost/alpine:latest"
DEBU Credentials not found
DEBU Using registries.d directory /etc/containers/registries.d for sigstore configuration DEBU Using "default-docker" configuration
DEBU No signature storage configuration found for localhost/alpine:latest DEBU Looking for TLS certificates and private keys in /etc/docker/certs.d/localhost DEBU GET https://localhost/v2/
DEBU Ping https://localhost/v2/ err Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused (&url.Error{Op:"Get", URL:"https://localhost/v2/", Err:(
net.OpError)(0xc00038b090)}) DEBU GET https://localhost/v1/_ping
DEBU Ping https://localhost/v1/_ping err Get "https://localhost/v1/_ping": dial tcp [::1]:443: connect: connection refused (&url.Error{Op:"Get", URL:"https://localhost/v1/_ping", Err:(*net.OpError)(0xc00038b2c0)}) DEBU Accessing "localhost/alpine:latest" failed: error pinging docker registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused DEBU error copying src image ["docker://localhost/alpine:latest"] to dest image ["localhost/alpine:latest"] err: Error initializing source docker://localhost/alpine:latest: error pinging docker registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused DEBU error pulling image "docker://localhost/alpine:latest": Error initializing source docker://localhost/alpine:latest: error pinging docker registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused DEBU unable to pull and read image "localhost/alpine": Error initializing source docker://localhost/alpine:latest: error pinging docker registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.access.redhat.com/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.access.redhat.com/alpine:latest" DEBU reference "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.access.redhat.com/alpine:latest" does not resolve to an image ID DEBU registry "registry.access.redhat.com" is not listed in registries configuration 
"/etc/containers/registries.conf", assuming it's not blocked DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.access.redhat.com/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.access.redhat.com/alpine:latest" DEBU copying "docker://registry.access.redhat.com/alpine:latest" to "registry.access.redhat.com/alpine:latest" DEBU Trying to access "registry.access.redhat.com/alpine:latest" DEBU Credentials not found
DEBU Using registries.d directory /etc/containers/registries.d for sigstore configuration DEBU Using "default-docker" configuration
DEBU No signature storage configuration found for registry.access.redhat.com/alpine:latest DEBU Looking for TLS certificates and private keys in /etc/docker/certs.d/registry.access.redhat.com DEBU GET https://registry.access.redhat.com/v2/
DEBU Ping https://registry.access.redhat.com/v2/ status 200 DEBU GET https://registry.access.redhat.com/v2/alpine/manifests/latest DEBU Accessing "registry.access.redhat.com/alpine:latest" failed: Error reading manifest latest in registry.access.redhat.com/alpine: name unknown: Repo not found DEBU error copying src image ["docker://registry.access.redhat.com/alpine:latest"] to dest image ["registry.access.redhat.com/alpine:latest"] err: Error initializing source docker://registry.access.redhat.com/alpine:latest: Error reading manifest latest in registry.access.redhat.com/alpine: name unknown: Repo not found DEBU error pulling image "docker://registry.access.redhat.com/alpine:latest": Error initializing source docker://registry.access.redhat.com/alpine:latest: Error reading manifest latest in registry.access.redhat.com/alpine: name unknown: Repo not found DEBU unable to pull and read image "registry.access.redhat.com/alpine": Error initializing source docker://registry.access.redhat.com/alpine:latest: Error reading manifest latest in registry.access.redhat.com/alpine: name unknown: Repo not found DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.fedoraproject.org/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.fedoraproject.org/alpine:latest" DEBU reference "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.fedoraproject.org/alpine:latest" does not resolve to an image ID DEBU registry "registry.fedoraproject.org" is not listed in registries configuration "/etc/containers/registries.conf", assuming it's 
not blocked DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.fedoraproject.org/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.fedoraproject.org/alpine:latest" DEBU copying "docker://registry.fedoraproject.org/alpine:latest" to "registry.fedoraproject.org/alpine:latest" DEBU Trying to access "registry.fedoraproject.org/alpine:latest" DEBU Credentials not found
DEBU Using registries.d directory /etc/containers/registries.d for sigstore configuration DEBU Using "default-docker" configuration
DEBU No signature storage configuration found for registry.fedoraproject.org/alpine:latest DEBU Looking for TLS certificates and private keys in /etc/docker/certs.d/registry.fedoraproject.org DEBU GET https://registry.fedoraproject.org/v2/
DEBU Ping https://registry.fedoraproject.org/v2/ status 200 DEBU GET https://registry.fedoraproject.org/v2/alpine/manifests/latest DEBU Accessing "registry.fedoraproject.org/alpine:latest" failed: Error reading manifest latest in registry.fedoraproject.org/alpine: manifest unknown: manifest unknown DEBU error copying src image ["docker://registry.fedoraproject.org/alpine:latest"] to dest image ["registry.fedoraproject.org/alpine:latest"] err: Error initializing source docker://registry.fedoraproject.org/alpine:latest: Error reading manifest latest in registry.fedoraproject.org/alpine: manifest unknown: manifest unknown DEBU error pulling image "docker://registry.fedoraproject.org/alpine:latest": Error initializing source docker://registry.fedoraproject.org/alpine:latest: Error reading manifest latest in registry.fedoraproject.org/alpine: manifest unknown: manifest unknown DEBU unable to pull and read image "registry.fedoraproject.org/alpine": Error initializing source docker://registry.fedoraproject.org/alpine:latest: Error reading manifest latest in registry.fedoraproject.org/alpine: manifest unknown: manifest unknown DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.centos.org/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.centos.org/alpine:latest" DEBU reference "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.centos.org/alpine:latest" does not resolve to an image ID DEBU registry "registry.centos.org" is not listed in registries configuration "/etc/containers/registries.conf", assuming it's not 
blocked DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.centos.org/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]registry.centos.org/alpine:latest" DEBU copying "docker://registry.centos.org/alpine:latest" to "registry.centos.org/alpine:latest" DEBU Trying to access "registry.centos.org/alpine:latest" DEBU Credentials not found
DEBU Using registries.d directory /etc/containers/registries.d for sigstore configuration DEBU Using "default-docker" configuration
DEBU No signature storage configuration found for registry.centos.org/alpine:latest DEBU Looking for TLS certificates and private keys in /etc/docker/certs.d/registry.centos.org DEBU GET https://registry.centos.org/v2/
DEBU Ping https://registry.centos.org/v2/ status 200 DEBU GET https://registry.centos.org/v2/alpine/manifests/latest DEBU Accessing "registry.centos.org/alpine:latest" failed: Error reading manifest latest in registry.centos.org/alpine: manifest unknown: manifest unknown DEBU error copying src image ["docker://registry.centos.org/alpine:latest"] to dest image ["registry.centos.org/alpine:latest"] err: Error initializing source docker://registry.centos.org/alpine:latest: Error reading manifest latest in registry.centos.org/alpine: manifest unknown: manifest unknown DEBU error pulling image "docker://registry.centos.org/alpine:latest": Error initializing source docker://registry.centos.org/alpine:latest: Error reading manifest latest in registry.centos.org/alpine: manifest unknown: manifest unknown DEBU unable to pull and read image "registry.centos.org/alpine": Error initializing source docker://registry.centos.org/alpine:latest: Error reading manifest latest in registry.centos.org/alpine: manifest unknown: manifest unknown DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/library/alpine:latest" DEBU parsed reference into "[overlay@/home/rpmbuild/.local/share/containers/storage+/run/user/2100/containers:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/library/alpine:latest" DEBU Trying to access "docker.io/library/alpine:latest" DEBU Credentials not found
DEBU Using registries.d directory /etc/containers/registries.d for sigstore configuration DEBU Using "default-docker" configuration
DEBU No signature storage configuration found for docker.io/library/alpine:latest DEBU Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io DEBU GET https://registry-1.docker.io/v2/
DEBU Ping https://registry-1.docker.io/v2/ status 401 DEBU GET https://auth.docker.io/token?scope=repository%3Alibrary%2Falpine%3Apull&service=registry.docker.io DEBU GET https://registry-1.docker.io/v2/library/alpine/manifests/latest DEBU GET https://registry-1.docker.io/v2/library/alpine/manifests/sha256:cb8a924afdf0229ef7515d9e5b3024e23b3eb03ddbba287f4a19c6ac90b8d221 DEBU Downloading /v2/library/alpine/blobs/sha256:a187dde48cd289ac374ad8539930628314bc581a481cdb41409c9289419ddb72 DEBU GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:a187dde48cd289ac374ad8539930628314bc581a481cdb41409c9289419ddb72 DEBU exporting opaque data as blob "sha256:a187dde48cd289ac374ad8539930628314bc581a481cdb41409c9289419ddb72" DEBU overlay: mount_data=lowerdir=/home/rpmbuild/.local/share/containers/storage/overlay/l/NUADX6MH3DN5DXH3I6V2MPPIPA,upperdir=/home/rpmbuild/.local/share/containers/storage/overlay/ce5b1d66df78ad872500099dd7530d2d27f46a04a7a0c00c145a52838be1e3d9/diff,workdir=/home/rpmbuild/.local/share/containers/storage/overlay/ce5b1d66df78ad872500099dd7530d2d27f46a04a7a0c00c145a52838be1e3d9/work,context="system_u:object_r:container_file_t:s0:c618,c1006" DEBU Container ID: 52afc2b14b987652c4468470c81a7af44c65a85dc4645ce3d8862b24216ae916 DEBU Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:label Args:[maintainer Tom Denham tom@tigera.io] Flags:[] Attrs:map[] Message:LABEL maintainer "Tom Denham tom@tigera.io" Original:LABEL maintainer="Tom Denham tom@tigera.io"} STEP 2: LABEL maintainer="Tom Denham tom@tigera.io" DEBU Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:env Args:[FLANNEL_ARCH amd64] Flags:[] Attrs:map[] Message:ENV FLANNEL_ARCH amd64 Original:ENV FLANNEL_ARCH=amd64} STEP 3: ENV FLANNEL_ARCH=amd64 DEBU Parsed Step: {Env:[FLANNEL_ARCH=amd64 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin FLANNEL_ARCH=amd64] Command:run Args:[apk add --no-cache 
iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates] Flags:[] Attrs:map[] Message:RUN apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates Original:RUN apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates} STEP 4: RUN apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates DEBU RUN imagebuilder.Run{Shell:true, Args:[]string{"apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates"}}, docker.Config{Hostname:"", Domainname:"", User:"", Memory:0, MemorySwap:0, MemoryReservation:0, KernelMemory:0, CPUShares:0, CPUSet:"", PortSpecs:[]string(nil), ExposedPorts:map[docker.Port]struct {}{}, PublishService:"", StopSignal:"", StopTimeout:0, Env:[]string{"FLANNEL_ARCH=amd64", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "", "FLANNEL_ARCH=amd64"}, Cmd:[]string{"/bin/sh"}, Shell:[]string{}, Healthcheck:(*docker.HealthConfig)(nil), DNS:[]string(nil), Image:"", Volumes:map[string]struct {}{}, VolumeDriver:"", WorkingDir:"", MacAddress:"", Entrypoint:[]string{}, SecurityOpts:[]string(nil), OnBuild:[]string{}, Mounts:[]docker.Mount(nil), Labels:map[string]string{"maintainer":"Tom Denham tom@tigera.io"}, AttachStdin:false, AttachStdout:false, AttachStderr:false, ArgsEscaped:false, Tty:false, OpenStdin:false, StdinOnce:false, NetworkDisabled:false, VolumesFrom:""} DEBU using "/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972" to hold bundle data DEBU Forcing use of an IPC namespace.
DEBU Forcing use of a PID namespace.
DEBU Forcing use of a user namespace.
DEBU Resources: &buildah.CommonBuildOptions{AddHost:[]string{}, CgroupParent:"", CPUPeriod:0x0, CPUQuota:0, CPUShares:0x0, CPUSetCPUs:"", CPUSetMems:"", HTTPProxy:true, Memory:0, DNSSearch:[]string{}, DNSServers:[]string{}, DNSOptions:[]string{}, MemorySwap:0, LabelOpts:[]string(nil), SeccompProfilePath:"/usr/share/containers/seccomp.json", ApparmorProfile:"", ShmSize:"65536k", Ulimit:[]string{}, Volumes:[]string{}}
DEBU stdio is a terminal, defaulting to using a terminal
DEBU ensuring working directory "/home/rpmbuild/.local/share/containers/storage/overlay/ce5b1d66df78ad872500099dd7530d2d27f46a04a7a0c00c145a52838be1e3d9/merged" exists
DEBU bind mounted "/home/rpmbuild/.local/share/containers/storage/overlay/ce5b1d66df78ad872500099dd7530d2d27f46a04a7a0c00c145a52838be1e3d9/merged" to "/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/mnt/rootfs"
DEBU bind mounted "/home/rpmbuild/.local/share/containers/storage/overlay-containers/52afc2b14b987652c4468470c81a7af44c65a85dc4645ce3d8862b24216ae916/userdata/run/secrets" to "/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/mnt/buildah-bind-target-6"
DEBU config = {"ociVersion":"1.0.1-dev","process":{"terminal":true,"user":{"uid":0,"gid":0},"args":["/bin/sh","-c","apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan \u0026\u0026 
update-ca-certificates"],"env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","FLANNEL_ARCH=amd64","HOSTNAME="],"cwd":"/","capabilities":{"bounding":["CAP_CHOWN","CAP_SETPCAP","CAP_DAC_OVERRIDE","CAP_MKNOD","CAP_NET_BIND_SERVICE","CAP_NET_RAW","CAP_AUDIT_WRITE","CAP_FSETID","CAP_KILL","CAP_SETFCAP","CAP_SYS_CHROOT","CAP_FOWNER","CAP_SETGID","CAP_SETUID"],"effective":["CAP_CHOWN","CAP_SETPCAP","CAP_DAC_OVERRIDE","CAP_MKNOD","CAP_NET_BIND_SERVICE","CAP_NET_RAW","CAP_AUDIT_WRITE","CAP_FSETID","CAP_KILL","CAP_SETFCAP","CAP_SYS_CHROOT","CAP_FOWNER","CAP_SETGID","CAP_SETUID"],"inheritable":["CAP_CHOWN","CAP_SETPCAP","CAP_DAC_OVERRIDE","CAP_MKNOD","CAP_NET_BIND_SERVICE","CAP_NET_RAW","CAP_AUDIT_WRITE","CAP_FSETID","CAP_KILL","CAP_SETFCAP","CAP_SYS_CHROOT","CAP_FOWNER","CAP_SETGID","CAP_SETUID"],"permitted":["CAP_CHOWN","CAP_SETPCAP","CAP_DAC_OVERRIDE","CAP_MKNOD","CAP_NET_BIND_SERVICE","CAP_NET_RAW","CAP_AUDIT_WRITE","CAP_FSETID","CAP_KILL","CAP_SETFCAP","CAP_SYS_CHROOT","CAP_FOWNER","CAP_SETGID","CAP_SETUID"],"ambient":["CAP_CHOWN","CAP_SETPCAP","CAP_DAC_OVERRIDE","CAP_MKNOD","CAP_NET_BIND_SERVICE","CAP_NET_RAW","CAP_AUDIT_WRITE","CAP_FSETID","CAP_KILL","CAP_SETFCAP","CAP_SYS_CHROOT","CAP_FOWNER","CAP_SETGID","CAP_SETUID"]},"rlimits":[{"type":"RLIMIT_NOFILE","hard":1024,"soft":1024}],"selinuxLabel":"system_u:system_r:container_t:s0:c618,c1006"},"root":{"path":"/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/mnt/rootfs"},"mounts":[{"destination":"/dev","type":"tmpfs","source":"/dev","options":["private","strictatime","noexec","nosuid","mode=755","size=65536k"]},{"destination":"/dev/mqueue","type":"mqueue","source":"mqueue","options":["private","nodev","noexec","nosuid"]},{"destination":"/dev/pts","type":"devpts","source":"pts","options":["private","noexec","nosuid","newinstance","ptmxmode=0666","mode=0620"]},{"destination":"/dev/shm","type":"tmpfs","source":"shm","options":["private","nodev","noexec","nosuid","mode=1777","size=65536k"]},{"de
stination":"/proc","type":"proc","source":"/proc","options":["private","nodev","noexec","nosuid"]},{"destination":"/sys","type":"bind","source":"/sys","options":["rbind","private","nodev","noexec","nosuid","ro"]},{"destination":"/run/secrets","type":"bind","source":"/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/mnt/buildah-bind-target-6","options":["bind","rprivate"]},{"destination":"/etc/hosts","type":"bind","source":"/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/hosts","options":["rbind"]},{"destination":"/etc/resolv.conf","type":"bind","source":"/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/resolv.conf","options":["rbind"]},{"destination":"/run/.containerenv","type":"bind","source":"/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/run/.containerenv","options":["rbind"]}],"linux":{"uidMappings":[{"containerID":0,"hostID":0,"size":1}],"gidMappings":[{"containerID":0,"hostID":0,"size":1},{"containerID":1,"hostID":1,"size":65536}],"namespaces":[{"type":"pid"},{"type":"ipc"},{"type":"mount"},{"type":"user"}],"seccomp":{"defaultAction":"SCMP_ACT_ERRNO","architectures":["SCMP_ARCH_X86_64","SCMP_ARCH_X86","SCMP_ARCH_X32"],"syscalls":[{"names":["accept"],"action":"SCMP_ACT_ALLOW"},{"names":["accept4"],"action":"SCMP_ACT_ALLOW"},{"names":["access"],"action":"SCMP_ACT_ALLOW"},{"names":["adjtimex"],"action":"SCMP_ACT_ALLOW"},{"names":["alarm"],"action":"SCMP_ACT_ALLOW"},{"names":["bind"],"action":"SCMP_ACT_ALLOW"},{"names":["brk"],"action":"SCMP_ACT_ALLOW"},{"names":["capget"],"action":"SCMP_ACT_ALLOW"},{"names":["capset"],"action":"SCMP_ACT_ALLOW"},{"names":["chdir"],"action":"SCMP_ACT_ALLOW"},{"names":["chmod"],"action":"SCMP_ACT_ALLOW"},{"names":["chown"],"action":"SCMP_ACT_ALLOW"},{"names":["chown32"],"action":"SCMP_ACT_ALLOW"},{"names":["clock_getres"],"action":"SCMP_ACT_ALLOW"},{"names":["clock_gettime"],"action":"SCMP_ACT_ALLOW"},{"names":["clock_nanosleep"],"action":"SCMP_ACT_ALLOW"},{"names":["close"],"action":"SCMP_ACT_A
LLOW"},{"names":["connect"],"action":"SCMP_ACT_ALLOW"},{"names":["copy_file_range"],"action":"SCMP_ACT_ALLOW"},{"names":["creat"],"action":"SCMP_ACT_ALLOW"},{"names":["dup"],"action":"SCMP_ACT_ALLOW"},{"names":["dup2"],"action":"SCMP_ACT_ALLOW"},{"names":["dup3"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_create"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_create1"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_ctl"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_ctl_old"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_pwait"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_wait"],"action":"SCMP_ACT_ALLOW"},{"names":["epoll_wait_old"],"action":"SCMP_ACT_ALLOW"},{"names":["eventfd"],"action":"SCMP_ACT_ALLOW"},{"names":["eventfd2"],"action":"SCMP_ACT_ALLOW"},{"names":["execve"],"action":"SCMP_ACT_ALLOW"},{"names":["execveat"],"action":"SCMP_ACT_ALLOW"},{"names":["exit"],"action":"SCMP_ACT_ALLOW"},{"names":["exit_group"],"action":"SCMP_ACT_ALLOW"},{"names":["faccessat"],"action":"SCMP_ACT_ALLOW"},{"names":["fadvise64"],"action":"SCMP_ACT_ALLOW"},{"names":["fadvise64_64"],"action":"SCMP_ACT_ALLOW"},{"names":["fallocate"],"action":"SCMP_ACT_ALLOW"},{"names":["fanotify_mark"],"action":"SCMP_ACT_ALLOW"},{"names":["fchdir"],"action":"SCMP_ACT_ALLOW"},{"names":["fchmod"],"action":"SCMP_ACT_ALLOW"},{"names":["fchmodat"],"action":"SCMP_ACT_ALLOW"},{"names":["fchown"],"action":"SCMP_ACT_ALLOW"},{"names":["fchown32"],"action":"SCMP_ACT_ALLOW"},{"names":["fchownat"],"action":"SCMP_ACT_ALLOW"},{"names":["fcntl"],"action":"SCMP_ACT_ALLOW"},{"names":["fcntl64"],"action":"SCMP_ACT_ALLOW"},{"names":["fdatasync"],"action":"SCMP_ACT_ALLOW"},{"names":["fgetxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["flistxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["flock"],"action":"SCMP_ACT_ALLOW"},{"names":["fork"],"action":"SCMP_ACT_ALLOW"},{"names":["fremovexattr"],"action":"SCMP_ACT_ALLOW"},{"names":["fsetxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["fstat"],"action":"SCMP_ACT_ALLOW"},{"names":["fstat64"],
"action":"SCMP_ACT_ALLOW"},{"names":["fstatat64"],"action":"SCMP_ACT_ALLOW"},{"names":["fstatfs"],"action":"SCMP_ACT_ALLOW"},{"names":["fstatfs64"],"action":"SCMP_ACT_ALLOW"},{"names":["fsync"],"action":"SCMP_ACT_ALLOW"},{"names":["ftruncate"],"action":"SCMP_ACT_ALLOW"},{"names":["ftruncate64"],"action":"SCMP_ACT_ALLOW"},{"names":["futex"],"action":"SCMP_ACT_ALLOW"},{"names":["futimesat"],"action":"SCMP_ACT_ALLOW"},{"names":["getcpu"],"action":"SCMP_ACT_ALLOW"},{"names":["getcwd"],"action":"SCMP_ACT_ALLOW"},{"names":["getdents"],"action":"SCMP_ACT_ALLOW"},{"names":["getdents64"],"action":"SCMP_ACT_ALLOW"},{"names":["getegid"],"action":"SCMP_ACT_ALLOW"},{"names":["getegid32"],"action":"SCMP_ACT_ALLOW"},{"names":["geteuid"],"action":"SCMP_ACT_ALLOW"},{"names":["geteuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["getgid"],"action":"SCMP_ACT_ALLOW"},{"names":["getgid32"],"action":"SCMP_ACT_ALLOW"},{"names":["getgroups"],"action":"SCMP_ACT_ALLOW"},{"names":["getgroups32"],"action":"SCMP_ACT_ALLOW"},{"names":["getitimer"],"action":"SCMP_ACT_ALLOW"},{"names":["getpeername"],"action":"SCMP_ACT_ALLOW"},{"names":["getpgid"],"action":"SCMP_ACT_ALLOW"},{"names":["getpgrp"],"action":"SCMP_ACT_ALLOW"},{"names":["getpid"],"action":"SCMP_ACT_ALLOW"},{"names":["getppid"],"action":"SCMP_ACT_ALLOW"},{"names":["getpriority"],"action":"SCMP_ACT_ALLOW"},{"names":["getrandom"],"action":"SCMP_ACT_ALLOW"},{"names":["getresgid"],"action":"SCMP_ACT_ALLOW"},{"names":["getresgid32"],"action":"SCMP_ACT_ALLOW"},{"names":["getresuid"],"action":"SCMP_ACT_ALLOW"},{"names":["getresuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["getrlimit"],"action":"SCMP_ACT_ALLOW"},{"names":["get_robust_list"],"action":"SCMP_ACT_ALLOW"},{"names":["getrusage"],"action":"SCMP_ACT_ALLOW"},{"names":["getsid"],"action":"SCMP_ACT_ALLOW"},{"names":["getsockname"],"action":"SCMP_ACT_ALLOW"},{"names":["getsockopt"],"action":"SCMP_ACT_ALLOW"},{"names":["get_thread_area"],"action":"SCMP_ACT_ALLOW"},{"names":["gettid"],"action":"SC
MP_ACT_ALLOW"},{"names":["gettimeofday"],"action":"SCMP_ACT_ALLOW"},{"names":["getuid"],"action":"SCMP_ACT_ALLOW"},{"names":["getuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["getxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["inotify_add_watch"],"action":"SCMP_ACT_ALLOW"},{"names":["inotify_init"],"action":"SCMP_ACT_ALLOW"},{"names":["inotify_init1"],"action":"SCMP_ACT_ALLOW"},{"names":["inotify_rm_watch"],"action":"SCMP_ACT_ALLOW"},{"names":["io_cancel"],"action":"SCMP_ACT_ALLOW"},{"names":["ioctl"],"action":"SCMP_ACT_ALLOW"},{"names":["io_destroy"],"action":"SCMP_ACT_ALLOW"},{"names":["io_getevents"],"action":"SCMP_ACT_ALLOW"},{"names":["ioprio_get"],"action":"SCMP_ACT_ALLOW"},{"names":["ioprio_set"],"action":"SCMP_ACT_ALLOW"},{"names":["io_setup"],"action":"SCMP_ACT_ALLOW"},{"names":["io_submit"],"action":"SCMP_ACT_ALLOW"},{"names":["ipc"],"action":"SCMP_ACT_ALLOW"},{"names":["kill"],"action":"SCMP_ACT_ALLOW"},{"names":["lchown"],"action":"SCMP_ACT_ALLOW"},{"names":["lchown32"],"action":"SCMP_ACT_ALLOW"},{"names":["lgetxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["link"],"action":"SCMP_ACT_ALLOW"},{"names":["linkat"],"action":"SCMP_ACT_ALLOW"},{"names":["listen"],"action":"SCMP_ACT_ALLOW"},{"names":["listxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["llistxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["_llseek"],"action":"SCMP_ACT_ALLOW"},{"names":["lremovexattr"],"action":"SCMP_ACT_ALLOW"},{"names":["lseek"],"action":"SCMP_ACT_ALLOW"},{"names":["lsetxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["lstat"],"action":"SCMP_ACT_ALLOW"},{"names":["lstat64"],"action":"SCMP_ACT_ALLOW"},{"names":["madvise"],"action":"SCMP_ACT_ALLOW"},{"names":["memfd_create"],"action":"SCMP_ACT_ALLOW"},{"names":["mincore"],"action":"SCMP_ACT_ALLOW"},{"names":["mkdir"],"action":"SCMP_ACT_ALLOW"},{"names":["mkdirat"],"action":"SCMP_ACT_ALLOW"},{"names":["mknod"],"action":"SCMP_ACT_ALLOW"},{"names":["mknodat"],"action":"SCMP_ACT_ALLOW"},{"names":["mlock"],"action":"SCMP_ACT_ALLOW"},{"names":["mloc
k2"],"action":"SCMP_ACT_ALLOW"},{"names":["mlockall"],"action":"SCMP_ACT_ALLOW"},{"names":["mmap"],"action":"SCMP_ACT_ALLOW"},{"names":["mmap2"],"action":"SCMP_ACT_ALLOW"},{"names":["mprotect"],"action":"SCMP_ACT_ALLOW"},{"names":["mq_getsetattr"],"action":"SCMP_ACT_ALLOW"},{"names":["mq_notify"],"action":"SCMP_ACT_ALLOW"},{"names":["mq_open"],"action":"SCMP_ACT_ALLOW"},{"names":["mq_timedreceive"],"action":"SCMP_ACT_ALLOW"},{"names":["mq_timedsend"],"action":"SCMP_ACT_ALLOW"},{"names":["mq_unlink"],"action":"SCMP_ACT_ALLOW"},{"names":["mremap"],"action":"SCMP_ACT_ALLOW"},{"names":["msgctl"],"action":"SCMP_ACT_ALLOW"},{"names":["msgget"],"action":"SCMP_ACT_ALLOW"},{"names":["msgrcv"],"action":"SCMP_ACT_ALLOW"},{"names":["msgsnd"],"action":"SCMP_ACT_ALLOW"},{"names":["msync"],"action":"SCMP_ACT_ALLOW"},{"names":["munlock"],"action":"SCMP_ACT_ALLOW"},{"names":["munlockall"],"action":"SCMP_ACT_ALLOW"},{"names":["munmap"],"action":"SCMP_ACT_ALLOW"},{"names":["nanosleep"],"action":"SCMP_ACT_ALLOW"},{"names":["newfstatat"],"action":"SCMP_ACT_ALLOW"},{"names":["_newselect"],"action":"SCMP_ACT_ALLOW"},{"names":["open"],"action":"SCMP_ACT_ALLOW"},{"names":["openat"],"action":"SCMP_ACT_ALLOW"},{"names":["pause"],"action":"SCMP_ACT_ALLOW"},{"names":["pipe"],"action":"SCMP_ACT_ALLOW"},{"names":["pipe2"],"action":"SCMP_ACT_ALLOW"},{"names":["poll"],"action":"SCMP_ACT_ALLOW"},{"names":["ppoll"],"action":"SCMP_ACT_ALLOW"},{"names":["prctl"],"action":"SCMP_ACT_ALLOW"},{"names":["pread64"],"action":"SCMP_ACT_ALLOW"},{"names":["preadv"],"action":"SCMP_ACT_ALLOW"},{"names":["preadv2"],"action":"SCMP_ACT_ALLOW"},{"names":["prlimit64"],"action":"SCMP_ACT_ALLOW"},{"names":["pselect6"],"action":"SCMP_ACT_ALLOW"},{"names":["pwrite64"],"action":"SCMP_ACT_ALLOW"},{"names":["pwritev"],"action":"SCMP_ACT_ALLOW"},{"names":["pwritev2"],"action":"SCMP_ACT_ALLOW"},{"names":["read"],"action":"SCMP_ACT_ALLOW"},{"names":["readahead"],"action":"SCMP_ACT_ALLOW"},{"names":["readlink"],"action":"SCMP_ACT
_ALLOW"},{"names":["readlinkat"],"action":"SCMP_ACT_ALLOW"},{"names":["readv"],"action":"SCMP_ACT_ALLOW"},{"names":["recv"],"action":"SCMP_ACT_ALLOW"},{"names":["recvfrom"],"action":"SCMP_ACT_ALLOW"},{"names":["recvmmsg"],"action":"SCMP_ACT_ALLOW"},{"names":["recvmsg"],"action":"SCMP_ACT_ALLOW"},{"names":["remap_file_pages"],"action":"SCMP_ACT_ALLOW"},{"names":["removexattr"],"action":"SCMP_ACT_ALLOW"},{"names":["rename"],"action":"SCMP_ACT_ALLOW"},{"names":["renameat"],"action":"SCMP_ACT_ALLOW"},{"names":["renameat2"],"action":"SCMP_ACT_ALLOW"},{"names":["restart_syscall"],"action":"SCMP_ACT_ALLOW"},{"names":["rmdir"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigaction"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigpending"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigprocmask"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigqueueinfo"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigreturn"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigsuspend"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_sigtimedwait"],"action":"SCMP_ACT_ALLOW"},{"names":["rt_tgsigqueueinfo"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_getaffinity"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_getattr"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_getparam"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_get_priority_max"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_get_priority_min"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_getscheduler"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_rr_get_interval"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_setaffinity"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_setattr"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_setparam"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_setscheduler"],"action":"SCMP_ACT_ALLOW"},{"names":["sched_yield"],"action":"SCMP_ACT_ALLOW"},{"names":["seccomp"],"action":"SCMP_ACT_ALLOW"},{"names":["select"],"action":"SCMP_ACT_ALLOW"},{"names":["semctl"],"action":"SCMP_ACT_ALLOW"},{"names":["semget"],"action":"SCMP_ACT_ALLOW"},{"names":["semop"],"action
":"SCMP_ACT_ALLOW"},{"names":["semtimedop"],"action":"SCMP_ACT_ALLOW"},{"names":["send"],"action":"SCMP_ACT_ALLOW"},{"names":["sendfile"],"action":"SCMP_ACT_ALLOW"},{"names":["sendfile64"],"action":"SCMP_ACT_ALLOW"},{"names":["sendmmsg"],"action":"SCMP_ACT_ALLOW"},{"names":["sendmsg"],"action":"SCMP_ACT_ALLOW"},{"names":["sendto"],"action":"SCMP_ACT_ALLOW"},{"names":["setfsgid"],"action":"SCMP_ACT_ALLOW"},{"names":["setfsgid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setfsuid"],"action":"SCMP_ACT_ALLOW"},{"names":["setfsuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setgid"],"action":"SCMP_ACT_ALLOW"},{"names":["setgid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setgroups"],"action":"SCMP_ACT_ALLOW"},{"names":["setgroups32"],"action":"SCMP_ACT_ALLOW"},{"names":["setitimer"],"action":"SCMP_ACT_ALLOW"},{"names":["setpgid"],"action":"SCMP_ACT_ALLOW"},{"names":["setpriority"],"action":"SCMP_ACT_ALLOW"},{"names":["setregid"],"action":"SCMP_ACT_ALLOW"},{"names":["setregid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setresgid"],"action":"SCMP_ACT_ALLOW"},{"names":["setresgid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setresuid"],"action":"SCMP_ACT_ALLOW"},{"names":["setresuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setreuid"],"action":"SCMP_ACT_ALLOW"},{"names":["setreuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setrlimit"],"action":"SCMP_ACT_ALLOW"},{"names":["set_robust_list"],"action":"SCMP_ACT_ALLOW"},{"names":["setsid"],"action":"SCMP_ACT_ALLOW"},{"names":["setsockopt"],"action":"SCMP_ACT_ALLOW"},{"names":["set_thread_area"],"action":"SCMP_ACT_ALLOW"},{"names":["set_tid_address"],"action":"SCMP_ACT_ALLOW"},{"names":["setuid"],"action":"SCMP_ACT_ALLOW"},{"names":["setuid32"],"action":"SCMP_ACT_ALLOW"},{"names":["setxattr"],"action":"SCMP_ACT_ALLOW"},{"names":["shmat"],"action":"SCMP_ACT_ALLOW"},{"names":["shmctl"],"action":"SCMP_ACT_ALLOW"},{"names":["shmdt"],"action":"SCMP_ACT_ALLOW"},{"names":["shmget"],"action":"SCMP_ACT_ALLOW"},{"names":["shutdown"],"action":"SCMP_ACT_
ALLOW"},{"names":["sigaltstack"],"action":"SCMP_ACT_ALLOW"},{"names":["signalfd"],"action":"SCMP_ACT_ALLOW"},{"names":["signalfd4"],"action":"SCMP_ACT_ALLOW"},{"names":["sigreturn"],"action":"SCMP_ACT_ALLOW"},{"names":["socket"],"action":"SCMP_ACT_ALLOW"},{"names":["socketcall"],"action":"SCMP_ACT_ALLOW"},{"names":["socketpair"],"action":"SCMP_ACT_ALLOW"},{"names":["splice"],"action":"SCMP_ACT_ALLOW"},{"names":["stat"],"action":"SCMP_ACT_ALLOW"},{"names":["stat64"],"action":"SCMP_ACT_ALLOW"},{"names":["statfs"],"action":"SCMP_ACT_ALLOW"},{"names":["statfs64"],"action":"SCMP_ACT_ALLOW"},{"names":["statx"],"action":"SCMP_ACT_ALLOW"},{"names":["symlink"],"action":"SCMP_ACT_ALLOW"},{"names":["symlinkat"],"action":"SCMP_ACT_ALLOW"},{"names":["sync"],"action":"SCMP_ACT_ALLOW"},{"names":["sync_file_range"],"action":"SCMP_ACT_ALLOW"},{"names":["syncfs"],"action":"SCMP_ACT_ALLOW"},{"names":["sysinfo"],"action":"SCMP_ACT_ALLOW"},{"names":["syslog"],"action":"SCMP_ACT_ALLOW"},{"names":["tee"],"action":"SCMP_ACT_ALLOW"},{"names":["tgkill"],"action":"SCMP_ACT_ALLOW"},{"names":["time"],"action":"SCMP_ACT_ALLOW"},{"names":["timer_create"],"action":"SCMP_ACT_ALLOW"},{"names":["timer_delete"],"action":"SCMP_ACT_ALLOW"},{"names":["timerfd_create"],"action":"SCMP_ACT_ALLOW"},{"names":["timerfd_gettime"],"action":"SCMP_ACT_ALLOW"},{"names":["timerfd_settime"],"action":"SCMP_ACT_ALLOW"},{"names":["timer_getoverrun"],"action":"SCMP_ACT_ALLOW"},{"names":["timer_gettime"],"action":"SCMP_ACT_ALLOW"},{"names":["timer_settime"],"action":"SCMP_ACT_ALLOW"},{"names":["times"],"action":"SCMP_ACT_ALLOW"},{"names":["tkill"],"action":"SCMP_ACT_ALLOW"},{"names":["truncate"],"action":"SCMP_ACT_ALLOW"},{"names":["truncate64"],"action":"SCMP_ACT_ALLOW"},{"names":["ugetrlimit"],"action":"SCMP_ACT_ALLOW"},{"names":["umask"],"action":"SCMP_ACT_ALLOW"},{"names":["uname"],"action":"SCMP_ACT_ALLOW"},{"names":["unlink"],"action":"SCMP_ACT_ALLOW"},{"names":["unlinkat"],"action":"SCMP_ACT_ALLOW"},{"names":["utim
e"],"action":"SCMP_ACT_ALLOW"},{"names":["utimensat"],"action":"SCMP_ACT_ALLOW"},{"names":["utimes"],"action":"SCMP_ACT_ALLOW"},{"names":["vfork"],"action":"SCMP_ACT_ALLOW"},{"names":["vmsplice"],"action":"SCMP_ACT_ALLOW"},{"names":["wait4"],"action":"SCMP_ACT_ALLOW"},{"names":["waitid"],"action":"SCMP_ACT_ALLOW"},{"names":["waitpid"],"action":"SCMP_ACT_ALLOW"},{"names":["write"],"action":"SCMP_ACT_ALLOW"},{"names":["writev"],"action":"SCMP_ACT_ALLOW"},{"names":["mount"],"action":"SCMP_ACT_ALLOW"},{"names":["umount2"],"action":"SCMP_ACT_ALLOW"},{"names":["reboot"],"action":"SCMP_ACT_ALLOW"},{"names":["name_to_handle_at"],"action":"SCMP_ACT_ALLOW"},{"names":["unshare"],"action":"SCMP_ACT_ALLOW"},{"names":["personality"],"action":"SCMP_ACT_ALLOW","args":[{"index":0,"value":0,"op":"SCMP_CMP_EQ"}]},{"names":["personality"],"action":"SCMP_ACT_ALLOW","args":[{"index":0,"value":8,"op":"SCMP_CMP_EQ"}]},{"names":["personality"],"action":"SCMP_ACT_ALLOW","args":[{"index":0,"value":131072,"op":"SCMP_CMP_EQ"}]},{"names":["personality"],"action":"SCMP_ACT_ALLOW","args":[{"index":0,"value":131080,"op":"SCMP_CMP_EQ"}]},{"names":["personality"],"action":"SCMP_ACT_ALLOW","args":[{"index":0,"value":4294967295,"op":"SCMP_CMP_EQ"}]},{"names":["arch_prctl"],"action":"SCMP_ACT_ALLOW"},{"names":["modify_ldt"],"action":"SCMP_ACT_ALLOW"},{"names":["clone"],"action":"SCMP_ACT_ALLOW","args":[{"index":0,"value":2080505856,"op":"SCMP_CMP_MASKED_EQ"}]},{"names":["chroot"],"action":"SCMP_ACT_ALLOW"}]},"maskedPaths":["/proc/acpi","/proc/kcore","/proc/keys","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug","/proc/scsi","/sys/firmware","/sys/fs/cgroup","/sys/fs/selinux"],"readonlyPaths":["/proc/asound","/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"],"mountLabel":"system_u:object_r:container_file_t:s0:c618,c1006"}} DEBU Running ["runc" "create" "--bundle" "/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972" "--pid-file" 
"/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/pid" "--no-new-keyring" "--console-socket" "/home/rpmbuild/nimbus8/flannel-build/tmp/buildah491815972/console.sock" "buildah-buildah491815972"]
DEBU Running ["runc" "start" "buildah-buildah491815972"]
DEBU socket descriptor is for "/dev/ptmx"
DEBU control messages: [{{20 1 1} [14 0 0 0]}]
DEBU fds: [14]
standard_init_linux.go:211: exec user process caused "permission denied"
DEBU error building at step {Env:[FLANNEL_ARCH=amd64 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin FLANNEL_ARCH=amd64] Command:run Args:[apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates] Flags:[] Attrs:map[] Message:RUN apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates Original:RUN apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates}: error while running runtime: exit status 1
error building at STEP "RUN apk add --no-cache iproute2 net-tools ca-certificates iptables strongswan && update-ca-certificates": error while running runtime: exit status 1
ERRO exit status 1

After converting to a buildah script - seeing the same issues:

$ buildah unshare ./flannel-amd64.sh

error running container: failed to read from slirp4netns sync pipe: EOF error running container: failed to read from slirp4netns sync pipe: EOF af3f31bd234d8816a1d57ea4145c968dcab2852bbe1a7ef5d4a9758caf4ee61f 914221d232f6e85366c93e1431f59a909b399900a7610f9489a2531b8878ea3a Getting image source signatures Copying blob beee9f30bc1f skipped: already exists Copying blob ce7c2b534bf0 done Copying config 97701edc27 done Writing manifest to image destination Storing signatures 97701edc27f4b095b7314cc994e2ad889b2705a4f93add6b2877d96187e8657c

When run as root, or with setenforce 0, it builds the image just fine.

Version:         1.14.8
Go Version:      go1.14.1
Image Spec:      1.0.1-dev
Runtime Spec:    1.0.1-dev
CNI Spec:        0.4.0
libcni Version:  
image Version:   5.4.3
Git Commit:      
Built:           Sat Apr 18 00:14:37 2020
OS/Arch:         linux/amd64

system storage.conf

[storage]

# Default Storage Driver
driver = "overlay"

# Temporary storage location
runroot = "/var/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Size is used to set a maximum size of the container image.
# Only supported by certain container storage drivers.
size = ""

# Path to an helper program to use for mounting the file system instead of
# mounting it directly.
mount_program = "/usr/bin/fuse-overlayfs"

# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
override_kernel_check = "true"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside
# of a container, to UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs. Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number
# of mappings which the kernel will allow when you later attempt to run a
# container.
remap-uids = 0:1668442479:65536
remap-gids = 0:1668442479:65536

# Remap-User/Group is a name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped container-level
# ID, until all of the entries have been used for maps.
remap-user = "storage"
remap-group = "storage"

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be grown.
# This is specified in terms of % of pool size. So a value of 20 means that
# when threshold is hit, pool will be grown by 20% of existing pool size.
autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms of
# percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
blocksize = "64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
fs = "xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require
# for new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
mkfsarg = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion. If the
# device is busy when the driver attempts to delete it, the driver will
# attempt to delete device every 30 seconds until successful. If the program
# using the driver exits, the driver will continue attempting to cleanup the
# next time the driver is used. Deferred deletion permanently deletes the
# device and all data stored in device will be lost.
use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
xfs_nospace_max_retries = "0"

# If specified, use OSTree to deduplicate files with the overlay backend
ostree_repo = ""

# Set to skip a PRIVATE bind mount on the storage home directory. Only
# supported by certain container storage drivers
skip_mount_home = "false"

rpmbuild storage.conf:

[storage]
driver = "overlay"
runroot = "/run/user/2100/containers"
graphroot = "/home/rpmbuild/.local/share/containers/storage"

[storage.options]
size = ""
remap-uids = ""
remap-gids = ""
ignore_chown_errors = ""
remap-user = ""
remap-group = ""
skip_mount_home = ""
mount_program = "/usr/bin/fuse-overlayfs"
mountopt = ""

[storage.options.aufs]
mountopt = ""

[storage.options.btrfs]
min_space = ""
size = ""

[storage.options.thinpool]
autoextend_percent = ""
autoextend_threshold = ""
basesize = ""
blocksize = ""
directlvm_device = ""
directlvm_device_force = ""
fs = ""
log_level = ""
min_free_space = ""
mkfsarg = ""
mountopt = ""
size = ""
use_deferred_deletion = ""
use_deferred_removal = ""
xfs_nospace_max_retries = ""

[storage.options.overlay]
ignore_chown_errors = ""
mountopt = ""
mount_program = ""
size = ""
skip_mount_home = ""

[storage.options.vfs]
ignore_chown_errors = ""

[storage.options.zfs]
mountopt = ""
fsname = ""
size = ""

I started building a SELinux policy to address this, but AVCs stop showing up even with semodule -DB. That's quite frustrating when nothing indicates which rules are missing. The policy below allowed me to get the alpine image to show up in buildah. The process transition denial keeps showing up after I run the buildah commands, so I think that part is a bit more complex. What I have so far:

policy_module(buildah, 1.0.0)

########################################
#
# Declarations
#

type container_file_t;
type container_runtime_t;
type container_t;
type staff_t;

########################################
#
# buildah local policy
#

#============= container_runtime_t ==============
allow container_runtime_t container_t:process transition;

#============= staff_t ==============
allow staff_t container_file_t:chr_file { ioctl read write };
allow staff_t container_file_t:dir relabelto;
allow staff_t container_file_t:file relabelto;
allow staff_t self:cap_userns { chown dac_override dac_read_search fowner fsetid setgid setuid };

files_mounton_rootfs(staff_t)

fs_mount_fusefs(staff_t)
fs_unmount_fusefs(staff_t)
fs_unmount_xattr_fs(staff_t)
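For anyone following along, a local .te module like this is typically compiled and loaded with the selinux-policy-devel Makefile. A minimal sketch, assuming the policy is saved as ./buildah.te (the file name is an assumption); the guard just skips the build where the prerequisites are missing:

```shell
# Sketch: compile and load a local policy module. Assumes ./buildah.te
# exists and the selinux-policy-devel package provides the Makefile below.
devel_mk=/usr/share/selinux/devel/Makefile
if [ -f buildah.te ] && [ -f "$devel_mk" ]; then
  make -f "$devel_mk" buildah.pp   # compile buildah.te into buildah.pp
  sudo semodule -i buildah.pp      # load the compiled module
else
  echo "need ./buildah.te and selinux-policy-devel to build the module"
fi
```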

FYI - this affects podman as well. It will not run at all as the staff SELinux user with SELinux enabled; it just hangs. I have a sneaking suspicion that the newuidmap/newgidmap mappings are also involved. It works as root, and as a regular user with SELinux disabled.
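On the newuidmap/newgidmap suspicion: rootless podman and buildah depend on subordinate ID ranges in /etc/subuid and /etc/subgid, so a bad or missing entry is worth ruling out. A small sketch of what such an entry means (the user name and range below are illustrative, not taken from this system):

```shell
# Format of an /etc/subuid (or /etc/subgid) entry: user:start:count.
# The values here are made up for illustration only.
line="rpmbuild:100000:65536"
IFS=: read -r user start count <<<"$line"
echo "$user: container IDs 1..$count map to host IDs $start..$((start + count - 1))"
# → rpmbuild: container IDs 1..65536 map to host IDs 100000..165535

# To check a real system:
#   grep "^$USER:" /etc/subuid /etc/subgid
#   podman unshare cat /proc/self/uid_map
```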

rhatdan commented 3 years ago

I believe this works now.

laolux commented 3 years ago

@rhatdan sorry if I just don't understand the solution, but I cannot run containers as staff_u

Whenever I try to run a container, I get type=AVC msg=audit(1623763861.316:852): avc: denied { transition } for pid=3641 comm="3" path="/usr/bin/bash" dev="overlay" ino=4582721 scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c363,c421 tclass=process permissive=0
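For reference, a denial like this maps mechanically onto the allow rule audit2allow would propose. A sketch that extracts the pieces from a trimmed copy of the message (illustration only: the container_runtime_t → container_t transition is exactly what container-selinux is supposed to permit, so blanket-allowing it locally is not the right fix):

```shell
# Pull the source type, target type, class and permission out of the AVC
# message; this is the text-processing step that audit2allow automates.
avc='avc: denied { transition } for pid=3641 comm="3" scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c363,c421 tclass=process'
src=$(sed -n 's/.*scontext=[^:]*:[^:]*:\([^: ]*\).*/\1/p' <<<"$avc")
tgt=$(sed -n 's/.*tcontext=[^:]*:[^:]*:\([^: ]*\).*/\1/p' <<<"$avc")
cls=$(sed -n 's/.*tclass=\([^ ]*\).*/\1/p' <<<"$avc")
perm=$(sed -n 's/.*denied { \([^}]*\) }.*/\1/p' <<<"$avc")
echo "allow $src $tgt:$cls $perm;"
# → allow container_runtime_t container_t:process transition;
```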

My teststaff user is SELinux user staff_u and has MLS/MCS Range s0-s0:c0.c1023

My OS is Fedora 34, podman is 3.2.1 and container-selinux version is 2.163.0

semanage permissive -a container_runtime_t lets me run containers, and then I only get the AVC denial listed above, albeit with permissive=1.

What am I doing wrong?