@mheon could you attach the entire AVC message? The one above is cut off.
The AVCs show me two things. First, I am not sure you have the latest container-selinux installed. Run
dnf reinstall container-selinux
and make sure it completes successfully. You might need to update the libsemanage package, or edit /etc/selinux/semanage.conf to change expand-check to 0. There was a bad version of libsemanage released that caused issues with container-selinux.
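For reference, a minimal sketch of that semanage.conf edit, assuming the stock option format libsemanage uses:
# /etc/selinux/semanage.conf
# Disable the expand-check pass during policy rebuilds to work around
# the bad libsemanage release mentioned above:
expand-check=0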
It also looks like some content in the container is labeled var_lib_t. Did you volume mount something in, or is /var/lib/containers mislabeled?
restorecon -R -v /var/lib/containers
Labels on /var/lib/containers/ were all wrong - the restorecon seems to have resolved the issue.
@lukasheinrich Can you verify that the restorecon command above resolves the issue? I think we might have shipped a Vagrant image with bad SELinux labels on part of the filesystem for F27.
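For anyone trying to verify this, one quick way to inspect the labels; the expected type container_var_lib_t is my assumption based on the container-selinux policy, and may differ across policy versions:
# Show the SELinux context on the directory itself (-d), not its contents:
ls -Zd /var/lib/containers
# A correctly labeled directory should show type container_var_lib_t;
# var_lib_t here would indicate the mislabeling described above.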
Thanks @mheon for finding the issue. I can confirm that setenforce 0 makes it work, but:
[root@localhost vagrant]# podman --version
podman version 0.8.1
[root@localhost vagrant]# podman run --rm -it busybox echo hello world
[root@localhost vagrant]# echo $?
0
[root@localhost vagrant]# setenforce 0
[root@localhost vagrant]# podman run --rm -it busybox echo hello world
hello world
[root@localhost vagrant]# echo $?
0
[root@localhost vagrant]# setenforce 1
[root@localhost vagrant]# restorecon -R -v /var/lib/containers/
[root@localhost vagrant]# podman run --rm -it busybox echo hello world
[root@localhost vagrant]# echo $?
0
This is with the box:
Vagrant.configure("2") do |config|
config.vm.box = "fedora/28-cloud-base"
config.vm.box_version = "20180425"
end
and after dnf install -y podman conmon
(btw: shouldn't conmon be a dependency?)
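For completeness, bringing this environment up follows the standard Vagrant flow; these are stock Vagrant and dnf commands, nothing specific to this issue:
vagrant up          # provision the fedora/28-cloud-base box
vagrant ssh         # log into the VM
sudo dnf install -y podman conmon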
Also, rootless containers don't work either unless I setenforce 0 (the transcript above was as root). Nothing gets written to /var/lib/containers in the rootless case, so the restorecon above alone seems not to be enough.
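One possible explanation for the rootless case (my assumption based on podman's default rootless storage location, not something confirmed in this thread): rootless podman keeps its image storage under the user's home directory, so relabeling /var/lib/containers never touches it. The rootless equivalent of the fix might look like:
# Hypothetical: relabel the default rootless storage path, run as the
# unprivileged user rather than root:
restorecon -R -v ~/.local/share/containers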
PS: just noticed this is now F28, but I can try again with F27 later.
So restorecon is actually breaking things on F28? Fascinating.
@baude @lsm5 conmon is a dependency of Podman on F28 already, right?
Yes, conmon is a dependency on F28. We need a newer conmon, which is why cri-o-1.11 is in updates-testing.
@lukasheinrich What AVCs are you getting in the third example after setenforce 1? Check with:
ausearch -m avc
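If the audit log is noisy, ausearch can be narrowed to recent events using the audit package's standard time filter:
# Only show AVC denials from roughly the last 10 minutes:
ausearch -m avc -ts recent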
@lukasheinrich These are the same AVCs that @mheon showed me. The issue again is that container-selinux did not install correctly.
dnf -y update libsemanage container-selinux
dnf -y reinstall container-selinux
restorecon -R -v /var/lib/containers
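A quick sanity check that the container-selinux policy module actually landed this time (semodule ships with policycoreutils; the module name "container" is my assumption):
# List installed policy modules and look for the container module:
semodule -l | grep -w container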
@rhatdan confirmed: both rootful and rootless containers work with this. So is the issue with the Vagrant image or with podman?
With the F27 image, it is definitely the vagrant image - SELinux labels were wrong on /var/lib/containers in the image as shipped. Can't say for certain about F28, but I'm inclined to believe that one could be a similar case.
Thanks @mheon. Can the images be fixed? (I'm not sure what the process is for updating Vagrant images.)
@dustymabe @lsm5 Any ideas on how to fix this?
We don't release updated cloud base images. We'd like to, but we don't today. We do release updated Atomic Host images, so those should work fine. Could we possibly work around this with an RPM %post?
We'd need a restorecon on the CRI-O and Podman RPMs in %post, probably Skopeo too. I don't know how expensive this really is; if we're tacking 5 minutes onto every Podman RPM install, it might not be a good idea.
We could do something in the container-selinux package so that everyone gets the benefit, but this could be an expensive operation. I guess we could set a trigger so that it only happens once when updating the current container-selinux package.
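To make the trade-off concrete, here is a rough sketch of what such a scriptlet could look like in the container-selinux spec file; this is an illustration of the idea, not the actual packaging change that shipped:
%post
# Illustrative only: relabel container storage on upgrades ($1 > 1),
# skipping fresh installs where nothing can be mislabeled yet. The
# output redirect and trailing "|| :" keep a restorecon failure from
# failing the whole RPM transaction.
if [ "$1" -gt 1 ]; then
    restorecon -R /var/lib/containers >/dev/null 2>&1 || :
fi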
Is this just an issue on F27 or F28 as well?
It's F28 as well.
Is this still an issue?
I fixed container-selinux to do a better job with this. I will close and people can reopen if they see other issues.
kind/bug
Description
podman run busybox echo hello world
returns exit code 139. 139 is not part of the listed exit codes in https://github.com/projectatomic/libpod/blob/master/docs/podman-run.1.md

Steps to reproduce the issue:
I'm provisioning a Vagrant box using the Vagrantfile