Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

SELinux: container exits with code 139 #1118

Closed · lukasheinrich closed this issue 6 years ago

lukasheinrich commented 6 years ago

kind bug

Description

podman run busybox echo hello world returns exit code 139. 139 is not part of the listed exit codes in https://github.com/projectatomic/libpod/blob/master/docs/podman-run.1.md
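
For context: shells report a process killed by signal N as exit status 128 + N, so 139 suggests the container process died from signal 11 (SIGSEGV). A quick check of that convention, assuming bash:

echo $(( 139 - 128 ))   # -> 11
kill -l 11              # -> SEGV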

Steps to reproduce the issue:

I'm provisioning a Vagrant box using the following Vagrantfile:

Vagrant.configure('2') do |config|
  config.vm.box = "fedora/27-cloud-base"

  # Docker
  config.vm.provision :docker

  # Install appc tools & rocket
  config.vm.provision :shell, inline: <<EOF

EOF
end
> vagrant ssh
Last login: Thu Jul 19 20:49:03 2018 from 10.0.2.2
[vagrant@localhost ~]$ sudo -s
[root@localhost vagrant]# podman run busybox echo hello world
[root@localhost vagrant]# echo $?
139

Output of podman version:

podman version 0.7.1

Output of podman info:


host:
  MemFree: 85467136
  MemTotal: 509480960
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 1
  hostname: localhost.localdomain
  kernel: 4.13.9-300.fc27.x86_64
  os: linux
  uptime: 13m 1.36s
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
store:
  ContainerStore:
    number: 2
  GraphDriverName: overlay
  GraphOptions:
  - overlay.override_kernel_check=true
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 1
  RunRoot: /var/run/containers/storage

Additional environment details (AWS, VirtualBox, physical, etc.):

see Vagrantfile above
rhatdan commented 6 years ago

@mheon Could you attach the entire AVC message? The one above is cut off.

mheon commented 6 years ago

@rhatdan https://paste.fedoraproject.org/paste/6kC66erb1w1Y3vCEA0dfKQ

rhatdan commented 6 years ago

The AVCs show me two things. First, I am not sure you have the latest container-selinux installed. Run

dnf reinstall container-selinux

and make sure it completes successfully.

You might need to update the libsemanage package, or edit /etc/selinux/semanage.conf to change expand-check to 0. A bad version of libsemanage was released that caused issues with container-selinux.
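
A minimal sketch of that edit, assuming the stock semanage.conf layout (back the file up first):

grep expand-check /etc/selinux/semanage.conf    # show the current setting, if any
sed -i 's/^expand-check *= *1/expand-check = 0/' /etc/selinux/semanage.conf
# if the line is absent, append it instead:
# echo 'expand-check = 0' >> /etc/selinux/semanage.conf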

It also looks like some content in the container is labeled var_lib_t. Did you volume mount something in, or is /var/lib/containers mislabeled?

restorecon -R -v /var/lib/containers
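
To see whether the relabel is needed or took effect, compare the labels before and after (a sketch; with current container-selinux policy the expected type on this directory is container_var_lib_t):

ls -dZ /var/lib/containers    # check the label before
restorecon -R -v /var/lib/containers
ls -dZ /var/lib/containers    # should now show container_var_lib_t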

mheon commented 6 years ago

Labels on /var/lib/containers/ were all wrong - the restorecon seems to have resolved the issue.

@lukasheinrich Can you verify that the restorecon command above resolves the issue? I think we might have shipped a Vagrant image with bad SELinux labels on part of the filesystem for F27.

lukasheinrich commented 6 years ago

Thanks @mheon for finding the issue. I can confirm that setenforce 0 makes it work, but:

[root@localhost vagrant]# podman --version
podman version 0.8.1
[root@localhost vagrant]# podman run --rm -it busybox echo hello world
[root@localhost vagrant]# echo $?
0
[root@localhost vagrant]# setenforce 0
[root@localhost vagrant]# podman run --rm -it busybox echo hello world
hello world
[root@localhost vagrant]# echo $?
0
[root@localhost vagrant]# setenforce 1
[root@localhost vagrant]# restorecon -R -v /var/lib/containers/
[root@localhost vagrant]# podman run --rm -it busybox echo hello world
[root@localhost vagrant]# echo $?
0

This is with the following box:

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/28-cloud-base"
  config.vm.box_version = "20180425"
end

and after dnf install -y podman conmon (btw: shouldn't conmon be a dependency?)

Also, for rootless containers (the above was as root; rootless also doesn't work unless I setenforce 0), nothing gets written to /var/lib/containers, so the restorecon above doesn't seem to be enough.
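
A hedged guess at the rootless equivalent: rootless Podman keeps its image and container storage under the user's home directory (the default rootless GraphRoot is ~/.local/share/containers/storage), so a per-user relabel would target that path instead:

restorecon -R -v ~/.local/share/containers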

lukasheinrich commented 6 years ago

PS: just noticed this is now F28, but I can try again with F27 later.

mheon commented 6 years ago

So restorecon is actually breaking things on F28? Fascinating.

mheon commented 6 years ago

@baude @lsm5 conmon is a dependency of Podman on F28 already, right?

rhatdan commented 6 years ago

Yes, conmon is a dependency on F28. We need a newer conmon, which is why cri-o-1.11 is in updates-testing.

rhatdan commented 6 years ago

@lukasheinrich What AVCs are you getting on the third example, after setenforce 1?

ausearch -m avc
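
ausearch can also filter by time, which helps isolate the denials from the run just performed (sketch using standard ausearch flags):

ausearch -m avc -ts recent    # AVCs from the last 10 minutes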

lukasheinrich commented 6 years ago

@rhatdan https://gist.github.com/lukasheinrich/6e7c422ac924444045e109dfaba3645c

rhatdan commented 6 years ago

@lukasheinrich These are the same AVCs that @mheon showed me.

The issue again is that container-selinux did not install correctly.

dnf -y update libsemanage container-selinux
dnf -y reinstall container-selinux
restorecon -R -v /var/lib/containers
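
To confirm the policy module actually landed after the reinstall, listing loaded modules works (a sketch; the module that container-selinux ships is named container):

semodule -l | grep -w container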

lukasheinrich commented 6 years ago

@rhatdan Confirmed: both rootful and rootless containers work with this. So is the issue with the Vagrant image or with Podman?

mheon commented 6 years ago

With the F27 image, it is definitely the vagrant image - SELinux labels were wrong on /var/lib/containers in the image as shipped. Can't say for certain about F28, but I'm inclined to believe that one could be a similar case.

lukasheinrich commented 6 years ago

Thanks @mheon. Can the images be fixed? (Not sure what the process is for updating Vagrant images.)

rhatdan commented 6 years ago

@dustymabe @lsm5 Any ideas on how to fix this?

dustymabe commented 6 years ago

> With the F27 image, it is definitely the vagrant image - SELinux labels were wrong on /var/lib/containers in the image as shipped. Can't say for certain about F28, but I'm inclined to believe that one could be a similar case.

We don't release updated cloud base images. We'd like to, but we don't today. We do release updated Atomic Host images, so those should work fine. Could we possibly work around this with an RPM %post?

mheon commented 6 years ago

We'd need a restorecon on the CRI-O and Podman RPMs in %post, and probably Skopeo too. I don't know how expensive this really is; if we're tacking 5 minutes onto every install of a Podman RPM, it might not be a good idea.

rhatdan commented 6 years ago

We could do something in the container-selinux package so that everyone gets the benefit, but this could be an expensive operation. I guess we could set a trigger so that it only happens once, when updating the current container-selinux package.
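
A minimal sketch of such a scriptlet (hypothetical spec fragment, not the actual container-selinux packaging; a real trigger would guard it to fire only once per policy update):

%post
# relabel container storage after the policy module installs
if [ -d /var/lib/containers ]; then
    /usr/sbin/restorecon -R /var/lib/containers >/dev/null 2>&1 || :
fi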

rhatdan commented 6 years ago

Is this just an issue on F27 or F28 as well?

lukasheinrich commented 6 years ago

It's F28 as well.

baude commented 6 years ago

Is this still an issue?

rhatdan commented 6 years ago

I fixed container-selinux to do a better job with this. I will close and people can reopen if they see other issues.