Closed. debarshiray closed this issue 6 years ago.
FWIW, the result's the same if I run the above `podman run` examples as root.
@nalind @giuseppe This sounds like the chown code in containers/storage is losing the setuid bits?
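The suspicion above can be demonstrated without any container tooling. A minimal sketch (my own illustration, assuming a Linux host; no root needed): on Linux, any successful chown(2) on a file clears its setuid bit, even a no-op chown to the file's current owner, which is exactly the trap a chown-based UID remap can fall into.

```shell
# Demonstrate that chown(2) clears the setuid bit, even when the
# owner does not actually change.
f=$(mktemp)
chmod 4755 "$f"
stat -c '%a' "$f"                  # prints: 4755 (setuid bit set)
chown "$(id -u):$(id -g)" "$f"     # no-op chown to the current owner
stat -c '%a' "$f"                  # prints: 755 (setuid bit cleared)
rm -f "$f"
```

Since Linux 2.2.13 this applies to root's chown calls as well, so a layer-copying code path that chowns files to remap ownership will silently strip SUID unless it restores the mode afterwards.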
I am not able to reproduce this issue with a base image:
# podman version
Version: 0.9.1
Go Version: go1.10.4
OS/Arch: linux/amd64
# podman run --rm --uidmap 1000:0:1 --uidmap 0:1:1000 --uidmap 1001:1001:64536 registry.fedoraproject.org/fedora:28 ls -l /usr/bin/su
-rwsr-xr-x. 1 root root 46128 Jul 16 11:56 /usr/bin/su
and neither with an image using layers:
$ cat Dockerfile
FROM registry.fedoraproject.org/fedora:28
RUN yum install -y sudo
# podman run -it --rm --uidmap 1000:0:1 --uidmap 0:1:1000 --uidmap 1001:1001:64536 layered-image ls -l /usr/bin/su /usr/bin/sudo
-rwsr-xr-x. 1 root root 46128 Jul 16 11:56 /usr/bin/su
---s--x--x. 1 root root 157944 Jun 29 13:00 /usr/bin/sudo
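For reference, the `--uidmap` triples used in these commands have the form container-start:host-start:length. A hypothetical helper (`resolve_uid` is my own illustration, not part of podman) shows how a container UID resolves through such a map:

```shell
# Hypothetical helper: resolve a container UID through a list of
# --uidmap triples of the form container-start:host-start:length.
resolve_uid() {
    uid=$1; shift
    for triple in "$@"; do
        c=${triple%%:*}          # container-start
        rest=${triple#*:}
        h=${rest%%:*}            # host-start
        len=${rest#*:}           # length of the range
        if [ "$uid" -ge "$c" ] && [ "$uid" -lt "$((c + len))" ]; then
            echo "$((h + uid - c))"
            return 0
        fi
    done
    return 1                     # UID not covered by any mapping
}

# With the mappings above, container UID 1000 maps to host-side 0
# (the rootless user itself), and container root maps to host-side 1:
resolve_uid 1000 1000:0:1 0:1:1000 1001:1001:64536   # prints: 0
resolve_uid 0    1000:0:1 0:1:1000 1001:1001:64536   # prints: 1
```

This is the Silverblue toolbox layout: the ordinary user appears as UID 1000 inside the container, while container root is pushed onto the first subordinate UID.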
could you try with a fresh storage?
> could you try with a fresh storage?
Umm... what do you mean by "fresh storage"? I guess I have to wipe out some directory somewhere?
yes, in the rootless case you'd need to rm ~/.local/share/containers/
> I am not able to reproduce this issue with a base image:
I see. I still get the same behaviour with podman-0.9.2 (with the fix for #1522 cherry-picked on top). I tried the same commands and Dockerfile, both as root and as an ordinary user.
> could you try with a fresh storage?
I noticed that I couldn't `rm -rf ~/.local/share/containers/` without using sudo. Is that indicative of something? I am using ext4, in case that matters.
@debarshiray The reason you cannot delete ~/.local/share/containers/ as non-root is that there are files in those directories owned by UIDs different from your default UID.
You should be able to remove the content if you do a `buildah unshare`, which makes you root within your user namespace; you can then delete everything owned by the UIDs inside that user namespace.
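The mechanism can be sketched with plain util-linux as well (assuming unprivileged user namespaces are enabled on the host): `unshare -r`, like `buildah unshare`, maps your own UID to root inside a fresh user namespace, so files owned by your subordinate UIDs become removable.

```shell
# `unshare -r` maps the current user to root in a new user namespace,
# the same mechanism `buildah unshare` relies on:
unshare -r id -un        # prints: root

# The cleanup suggested above, guarded so it is a no-op where buildah
# is not installed (assumes the default rootless storage location):
if command -v buildah >/dev/null; then
    buildah unshare rm -rf ~/.local/share/containers/
fi
```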
So, I tried with the Dockerfile given above (the one with `yum install sudo`, based on Fedora 28), built with `buildah bud` as a simple user:
$ podman run -it --rm --uidmap 1000:0:1 --uidmap 0:1:1000 --uidmap 1001:1001:64536 1f68b79de7eb6cd0c263711e111be80b7fe06fa29db672fb1f3d4e12c8887e34 ls -l /usr/bin/su /usr/bin/sudo
-rwxr-xr-x. 1 root root 46128 Jul 16 11:56 /usr/bin/su
---s--x--x. 1 root root 157944 Jun 29 13:00 /usr/bin/sudo
Same process as root, with an image built as root with buildah:
# podman run -it --rm --uidmap 1000:0:1 --uidmap 0:1:1000 --uidmap 1001:1001:64536 9d77a544e32e4fba1fe2011c72af022b0e3fb904aab34b9501c2fc63bd7fd1f0 ls -l /usr/bin/su /usr/bin/sudo
-rwxr-xr-x. 1 root root 46328 Mar 27 09:26 /usr/bin/su
---s--x--x. 1 root root 157944 Jun 29 13:00 /usr/bin/sudo
In both cases, I see that `su` lost the SUID bit, while `sudo` did not.
# rpm -q podman
podman-0.9.1-3.gitaba58d1.fc28.x86_64
I've opened a PR here: https://github.com/containers/storage/pull/216
Yes, containers/storage#216 fixes this problem for me. Thanks!
/kind bug
Description
While trying to get `sudo` working on the Silverblue toolbox, we discovered that some binaries are losing their SUID bits inside the toolbox container. Stripping things down to a rootless `podman run ...` still shows the problem, even though the symptoms are slightly altered.

Let's play with the `fedora:28` image that comes with `/usr/bin/su`.

First, a simple `podman run`:

Now, we try to specify the UID mapping like we do in the Silverblue toolbox:

So far, so good.

Now, let's try the `fedora-toolbox:28` image that, among other things, layers `sudo` over the `fedora:28` image.

Like before, a simple `podman run`:

Still good.

Now with the UID mappings:

Notice how the `/usr/bin/su` binary no longer has the SUID bit.

Note that the Silverblue toolbox doesn't actually use `podman run`, nor does it enter the container as `root`. Instead, it uses `podman create`, `podman start` and `podman exec`, and enters the container as `$USER`. So this was an attempt at a more self-contained test case. The Silverblue toolbox will show similar, even if slightly different, symptoms.

Output of `podman version`:

Note that this is `podman-0.9.1.1` with the fix for #1452 cherry-picked on top.

Output of `podman info`:
:Additional environment details (AWS, VirtualBox, physical, etc.):
This is a physical laptop running Fedora 28 Silverblue 28.20180918.0.