polachz closed this issue 4 years ago
We can't set UID/GID 0 as rootless Podman - we don't have root privileges, so we should not be able to chown things to root.
Is it possible a Podman instance as root is slipping in somewhere?
I would figure this is "container/usernamespace" root as opposed to real root. Most likely one of the containers in the pod is creating content in the volume as "usernamespace" root.
Is it possible a Podman instance as root is slipping in somewhere?
No, it's not possible. Podman operations are performed only as user 1001, nowhere else.
I agree with @rhatdan that it is probably caused by initial container code and that the code run with container namespace UID 0.
But another question is why it changes the rights on inherited files too. Maybe it uses chown -R, I don't know, but anyway, sometimes I get UID:GID 0:0 on the host, and then no one but root can access these files.
If it's setting them to 0:0 on the host, root Podman must be involved (or you somehow mapped root into your user namespace, which should never be done). We do not have permission to chown to 0:0 on the host, even inside our rootless user namespace.
If it's setting them to 0:0 on the host, root Podman must be involved (or you somehow mapped root into your user namespace, which should never be done). We do not have permission to chown to 0:0 on the host, even inside our rootless user namespace.
I'm not a namespaces expert on Linux. I can only say that user 1001 is not in wheel, so it is not possible for the user to escalate privileges with sudo. The user was freshly created specifically for rootless containers (for security reasons), and the only modification made was adding an ssh key to the home directory, nothing else.
Maybe I forgot to mention that during container preparation I use
podman unshare chown -R 999:999 /mount/point/dir
to set the owner to the correct UID/GID from the mapped namespace. It succeeded and set 166534:166534 on the files and dirs as expected. The container then ran correctly several times without any problem. But randomly the 0:0 owner happens. I can't figure out reproducible steps for the issue; it sometimes just happens. So it looks like some timing problem or race condition in the Podman code to me...
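The arithmetic behind that 166534 can be checked: with conuser:165536:65536 in /etc/subuid, rootless Podman maps container UID 0 to the user's own UID (1001) and container UIDs 1..65536 onto the subuid range. A minimal sketch (values taken from this thread; the function name is just for illustration):

```python
# Sketch: how a container UID appears as a host UID under rootless Podman's
# default user namespace, using the /etc/subuid values from this thread.
SUBUID_START = 165536   # first host UID granted to conuser in /etc/subuid
USER_UID = 1001         # conuser's real UID, mapped to container UID 0

def host_uid(container_uid: int) -> int:
    """Translate a container UID to the host UID it shows up as on disk."""
    if container_uid == 0:
        return USER_UID                        # container root == the user
    return SUBUID_START + (container_uid - 1)  # the rest come from the subuid range

print(host_uid(999))   # mongo's in-container UID → 166534
```

This matches the observed ownership: mongo's UID 999 lands on 166534, so files owned by 0:0 on the host cannot have been created through this mapping.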
Can I investigate some logs, or enable a log or watchdog until I hit the issue again?
If a non-root user is able to use podman or a user namespace to create a real UID=0 file on disk, then this is a security problem. Could you verify as root that the file really has root ownership? I.e. check that the file is owned by root, and that the user namespace doing the checking is the host's user namespace.
# ls -l /PATH/TO/Bad/File
...
# cat /proc/self/uid_map
0 0 4294967295
@rhatdan
done as real root on the host -> conuser (UID 1001) has no access now -> permission denied
drwxr-x---. 2 root root  4096 Mar  8 18:03 config
-rw-r-----. 1 root root 20711 Mar  8 18:03 graylog.conf
Dump of the map as root and as the container user:
conuser@host ~$ cat /proc/self/uid_map
0 0 4294967295
root@host config$ cat /proc/self/uid_map
0 0 4294967295
Just one more note: if I remember correctly, I once had a problem removing some files in .local/share/containers/storage/overlay-layers or overlay-containers (not sure of the exact path now). It happened during my early attempts to connect images together... I had to wipe the whole storage as root, because conuser got access denied there too, and then rebuilt the storage from scratch.
This part of the storage is fine now, so I can't provide more exact info.
Your subuid and subgid files don't contain 0 for the host, so there's no way we have permission to create or chown to 0:0 as non-root Podman in this scenario. Most likely something else is at play here.
I'm not a Linux guru... maintaining Linux in my homelab is just a hobby.
I can't say what's wrong, but this happens only when I try to run a container with podman, and only sometimes, not every time. audit.log has no record of any of the mentioned files :( The last incident affected all three containers in the one pod, and each container was started separately as a user service with dependencies. But the problem with the mongo container I got from a direct podman run of the container.
If you need any additional checks or file dumps, please let me know...
An audit watch on the files might tell us what's changing the permissions, which would help a lot.
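For reference, a syscall-based audit rule (rather than a plain file watch) will also catch chowns on files created later under the directory. A sketch, assuming the /srv/volume_b/graylog path from this thread and an arbitrary key name podman-chown (both are placeholders to adjust):

```shell
# /etc/audit/rules.d/podman-chown.rules (path and key name are assumptions)
# Log every chown-family syscall touching the volume directory tree.
-a always,exit -F arch=b64 -S chown,fchown,fchownat,lchown -F dir=/srv/volume_b/graylog -k podman-chown
```

After the next incident, `ausearch -k podman-chown -i` should show the offending PID, executable, and the UID it ran as.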
@polachz So you are saying the config directory and graylog.conf file did not exist before running the rootless containers, and now they exist with these permissions?
drwxr-x---. 2 root root  4096 Mar  8 18:03 config
-rw-r-----. 1 root root 20711 Mar  8 18:03 graylog.conf
@rhatdan When the last incident occurred, all files and folders existed for all containers. They had UID:GID 166xxx:166xxx (depending on the container user id: mongo has 999, elastic 1000 and graylog 1100).
The containers had run smoothly many times before the incident, with several restarts and reboots.
After the last boot, the containers were not able to start due to permission issues, and all files in the storage mounts of all three containers are now owned by root:root.
The storage is on a separate vhdx, mounted into the main FS at /srv/volume_b. From /etc/fstab:
/dev/sdb1 /srv/volume_b ext4 rw,nodev,nosuid,noexec
The previous incidents were very similar, but I had only one container (mongo) in play, and the problem occurred immediately when I tried to run the container with podman run ..... with parameters. In that case the storage folder already existed, but was owned by 1001, or by a 166xxx UID as a result of podman unshare chown "999":"999".
In other words, the files and folders exist the whole time, with permissions that fit the containers, and after the incident they are owned by root and inaccessible to the containers.
I would think that some root running process, perhaps at reboot is recreating these files and that is the issue that you are seeing.
I would think that some root running process, perhaps at reboot is recreating these files and that is the issue that you are seeing.
Unfortunately, the previous incidents didn't happen during boot. They happened when I started the container with podman run or podman start from the conuser (1001) command line. As I mentioned before, conuser is not in the sudoers file or the wheel group, so there is no way to interfere with anything as root.
What I'm sure of is that the problem occurs during the container boot process. When I look at the container logs, I see nothing but a correct run and then the failure on permissions. See the attached file, please: log_fragment.log
@rhatdan
Today I tried to build everything from scratch: I removed all images, downloaded them again, etc. While doing this I discovered that many files inside the user's .local/share/containers are owned by root.
In the attachment you can find list of all files reported by this command:
find .local/share/containers -user root -name "*"
It's more than 500 items...
@rhatdan Anyway, I have prepared a trap: a special auditd rule for chown on the specific files. If it occurs again, I'll notify you here.
This sounds like, somehow, rootless Podman is executing as UID 0 - which is honestly bizarre. Some of our detection of whether we are/are not rootless is based on the UID the process is running as. If Podman were somehow accidentally run as root, we wouldn't be using those directories; we'd revert to the root-owned ones in /var/lib/containers.
@mheon
Hello, the trap was successful. And the problem is on the server, as you suggested before...
It's a forgotten old cron script that takes ownership of un-owned files. Because of the user namespace mapping, all the container files look unowned, so the script assigned them to root :( The script had been left behind in the VM template...
That's how all files in persistent storage were taken over by root; the files in container storage were probably handled through the /proc filesystem, via entries that look like "/proc/1614/fd/254".
I have to apologize for that false report.
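For anyone hitting the same symptom: files owned by a subuid-range UID such as 166534 have no passwd entry, so a find(1) -nouser test matches them. A hypothetical reconstruction of the kind of cleanup script involved (the actual script wasn't posted; the path and scanned directories are assumptions):

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/fix-unowned (a reconstruction, not the real script).
# Re-owns files whose UID/GID has no passwd/group entry -- which wrongly matches
# every file a rootless container creates through the subuid/subgid range.
find /srv /home -xdev \( -nouser -o -nogroup \) -exec chown root:root {} +
```

A script like this looks harmless on a plain system but breaks rootless containers exactly as described in this thread.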
No problem, glad to hear it's solved. Thanks for reporting back!
/kind bug
Description
I'm trying to run a Graylog instance as a rootless pod, so I have three containers in the same pod (mongo:3, docker.elastic.co/elasticsearch/elasticsearch:6.8.7, docker.io/graylog/graylog:3.2.2).
Occasionally I get the problem that starting the container(s) changes the UID:GID of files in the container-mounted volumes to 0:0 (root:root), and then all files are inaccessible to the containers. Usually I got this behavior immediately after image deploy and start, but today it happened to a fully running instance after a reboot. From the container logs I don't see any failures. The containers were shut down correctly, and when they came up again, all of them had problems reading/writing files, because everything is owned by root - I mean root of the host, not root of the container.
My regular user UID/GID is 1001:1001. (conuser)
mappings:
/etc/subuid:
user1:100000:65536
conuser:165536:65536
/etc/subgid:
user1:100000:65536
conuser:165536:65536
In my opinion the user mapping fails sometimes, and then the init process of the container changes the file owners to 0:0.
Steps to reproduce the issue:
Deploy a container image (mongo:3 for example) and mount a volume this way: /srv/volume_b/graylog/mongo/db:/data/db:Z. The mongo user INSIDE the container has UID 999.
Set the UID/GID of /srv/volume_b/graylog/mongo/db to my user's UID, 1001:1001.
Start container.
Describe the results you received:
Sometimes the directory /srv/volume_b/graylog/mongo/db has UID:GID 166534:1001, and any files inside the db directory have 166534:166534 - expected. Sometimes the UID:GID is 0:0 for the db directory and all files inside it too - Problem.
Describe the results you expected: The directory /srv/volume_b/graylog/mongo/db always has UID:GID 166534:1001, and all files inside the db folder have 166534:166534.
Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Additional environment details (AWS, VirtualBox, physical, etc.):
Hyper-V virtual machine, CentOS 8, fully patched