queeup opened this issue 1 year ago
What is the UID of the user running podman - is it 1000? Is the attempt here to have the volume owned by UID 1000 in the container, which is mapped to UID 1000 on the host? If so, --user 1000:1000 can probably be omitted; we do that by default when --userns=keep-id is passed.
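For illustration, a sketch of that default (untested; assumes the invoking rootless user is UID/GID 1000, and uses alpine as a stand-in image):

```shell
# With --userns=keep-id, Podman maps the invoking user's UID/GID to the
# same IDs inside the container and runs the process as that user by
# default, so for a UID-1000 user these two commands should behave the same:
podman run --rm --userns=keep-id --user 1000:1000 alpine id
podman run --rm --userns=keep-id alpine id
```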
Yeah, I was trying to achieve that.
My UID (the user running podman) is 1000.
But I don't get this:

With --userns=keep-id --user 1000:1000:
the created volume is owned by 100000, but the volume contents are owned by 1000.

Without --userns=keep-id, using just --user 1000:1000:
the created volume is owned by 1000, but the contents are owned by 100999.

If I create the volume manually with podman volume create, it is created as 1000; then using --userns=keep-id --user 1000:1000 for the container, I get what I want. But I want this to work without manually creating the volume.
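One way to avoid pre-creating the volume might be the U volume option, which asks Podman to chown the volume contents to the UID/GID the container runs as when it is mounted. An untested sketch (assumes a Podman version that supports the option):

```shell
# The :U suffix tells Podman to chown the named volume to the UID/GID
# the container runs as, so the volume does not need to be pre-created:
podman run --rm --userns=keep-id --user 1000:1000 \
  -v syncthing_data-test:/var/syncthing:U \
  docker.io/syncthing/syncthing:latest
```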
The 1000:1000 mapping to 100999 is definitely a bug in how we are calculating UID 1000 within the container.
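For what it's worth, 100999 is exactly what the default rootless mapping produces if container UID 1000 is resolved through the subordinate range instead of the keep-id mapping: container UID 0 maps to the user, and container UIDs from 1 up map into the subuid range starting at 100000. A quick check of the arithmetic:

```shell
# Default rootless mapping for this user:
#   container UID 0   -> host UID 1000 (the user itself)
#   container UID 1.. -> host UID 100000 + (container UID - 1)
subuid_start=100000
container_uid=1000
host_uid=$(( subuid_start + container_uid - 1 ))
echo "$host_uid"   # prints 100999
```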
The same happens with pods:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: syncthing-pod
  name: syncthing-pod
spec:
  containers:
  - image: docker.io/syncthing/syncthing:latest
    name: syncthing
    hostUsers: false
    securityContext:
      runAsGroup: 1000
      runAsUser: 1000
    volumeMounts:
    - mountPath: /var/syncthing
      name: syncthing_data-pvc
  restartPolicy: Never
  volumes:
  - name: syncthing_data-pvc
    persistentVolumeClaim:
      claimName: syncthing_data-test
❯ podman kube play syncthing-pod-test.yaml
Pod:
f14bbab18068b0ca1c7267f8886e7f06a912594f93fad851ae09a410161b132f
Container:
eaf706dfa9f0f7b5a97e32f8c648af919b18b3c4504aef53fff58fac3b8765c6
❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/
0700 drwx------@ - 1000 7 Dec 00:46 syncthing_data-test
❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/
0755 drwxr-xr-x@ - 100999 7 Dec 00:46 _data
With "--userns=keep-id":
❯ podman kube play --userns keep-id syncthing-pod-test.yaml
Pod:
51e61e09770df91283e529a44802cf3cbce0288f30203d2352244bd3963502b9
Container:
858cdd18f0972a8f743cd2e6866a47e4dc47512c0d9c21e31f8c36652ceeaf7f
❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/
0700 drwx------@ - 100000 7 Dec 00:48 syncthing_data-test
❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/
"/var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/": Permission denied (os error 13)
❯ sudo exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test
0755 drwxr-xr-x@ - 1000 7 Dec 00:48 _data
A friendly reminder that this issue had no activity for 30 days.
I may also be experiencing an issue related to this; I am unable to start pods or containers with --uidmap or --gidmap.
Here is a debug log creating and starting a pod with no maps: https://pastebin.com/NgNs2tLk
Here is a debug log attempting to create and start a pod with a map (intentionally) overlapping an existing user: https://pastebin.com/eSDxz514
In my case, I want some of the programs running in the pod to run as the host system user:group 516000013:516000012.
Edit: Additional info if it is relevant:
[aceblade258@fs01 ~]$ sudo podman version
Client: Podman Engine
Version: 4.3.1
API Version: 4.3.1
Go Version: go1.18.7
Built: Fri Nov 11 08:24:13 2022
OS/Arch: linux/amd64
[aceblade258@fs01 ~]$ sudo podman info
host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.5-1.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 99.25
    systemPercent: 0.39
    userPercent: 0.36
  cpus: 8
  distribution:
    distribution: fedora
    version: "36"
  eventLogger: journald
  hostname: fs01.core.kionade.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.0.15-200.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 133604745216
  memTotal: 135075651584
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.7.2-2.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 3h 51m 32.00s (Approximately 0.12 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 64956080128
  graphRootUsed: 2486108160
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668180253
  BuiltTime: Fri Nov 11 08:24:13 2022
  GitCommit: ""
  GoVersion: go1.18.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1
[aceblade258@fs01 ~]$ sudo rpm -q podman
podman-4.3.1-1.fc36.x86_64
@giuseppe PTAL
@rhatdan I'd expect your open PR to fix this issue.
The cause of the issue is that we chown the volume to the root user in the user namespace, which, in the case of --userns=keep-id, is the first additional ID assigned to the user. From the configuration above, I can see it is 100000.

uidmap:
- container_id: 0
  host_id: 1000
  size: 1
- container_id: 1
  host_id: 100000
  size: 65536
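Under that mapping, each container UID resolves to a host UID piecewise. A small sketch of the lookup (assuming exactly the two ranges quoted above) shows where both observed owners come from:

```shell
# Resolve a container UID to a host UID using the uidmap above:
#   range 1: container 0        -> host 1000 (size 1)
#   range 2: container 1..65536 -> host 100000 + (container UID - 1)
map_uid() {
  cuid=$1
  if [ "$cuid" -eq 0 ]; then
    echo 1000
  elif [ "$cuid" -ge 1 ] && [ "$cuid" -le 65536 ]; then
    echo $(( 100000 + cuid - 1 ))
  else
    echo unmapped
  fi
}
map_uid 0      # 1000   - the user running podman
map_uid 1      # 100000 - the first additional ID the volume is chowned to
map_uid 1000   # 100999 - the owner observed on the volume contents
```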
Any update on this? I'm having the same issue on Fedora Silverblue. It keeps using the wrong ID.
Is there any workaround for this?
My goal:
@smallest-quark It works for me when running rootless. It also depends on the container/image; some apply permissions themselves, which isn't really needed (the command/quadlet should manage this).
I'm running rootless, and I'm using images that allow supplying:

-e PUID=1000 \
-e PGID=1000 \

I'm not using Quadlets, just podman run.
@smallest-quark What's the name of the image? They cannot be run rootless in most cases, unfortunately, since they probably change the UID/GID using sudo (or doas).
You have to run them as root, and then it should work fine. The --user/keep-id options only work when the image itself doesn't change the ID with something like sudo.
For example this:
@smallest-quark Yeah, linuxserver doesn't support rootless; it will not work.
The problem is that they change permissions (including process ownership), which they shouldn't have to. Run with sudo podman ..., and it will work fine (don't apply user/group, etc.).
For reference, I'm using a Podman Quadlet:
$ /etc/containers/systemd/qbittorent.container
[Unit]
Description=qBittorrent
[Service]
TimeoutStartSec=900
Restart=always
[Container]
Image=lscr.io/linuxserver/qbittorrent:latest
AutoUpdate=registry
Volume=/data/qBittorrent/config:/config:rw,Z
Volume=/data/qBittorrent/downloads:/downloads:rw,Z
Secret=tlscert,target=/run/secrets/cert.pem
Secret=tlskey,target=/run/secrets/key.pem
Environment=PUID=1000
Environment=PGID=1000
Environment=TZ=UTC
Environment=WEBUI_PORT=8090
Environment=TORRENTING_PORT=6881
Network=pi.network
PublishPort=8090:8090
PublishPort=6881:6881
PublishPort=6881:6881/udp
[Install]
WantedBy=multi-user.target
Because they change the user in the Dockerfile, it is still run under your provided PUID/PGID.
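One way to verify which user the service actually ends up running as (a sketch; the container name is hypothetical and depends on how the quadlet names it):

```shell
# List user, group, and command of the processes inside the running container:
podman top qbittorrent user group args
```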
Thank you!
Hmm, that sucks a bit. I definitely want to run rootless, so the best solution is then to do the following (which I found somewhere), so I can just access and change files as 1000:1000, right?

On the host, create a group with GID 100999 and add the host user to that group. It is necessary to log out and log back in again for it to take effect.

sudo addgroup --gid 100999 g100999
sudo usermod -a -G g100999 $USER

Then set the directory and file permissions of the targeted directory and all its subdirectories and files.

Ensure the whole directory and all its subdirectories have group read, write, and execute permissions plus the setgid bit:

sudo find . -type d -exec chmod g+rwxs {} +

Ensure all files in all subdirectories have group read and write permissions:

sudo find . -type f -exec chmod g+rw {} +
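An alternative that avoids the extra group and sudo entirely is to do the chown from inside the rootless user namespace, where the subordinate IDs are directly usable. An untested sketch (adjust the volume path to your setup):

```shell
# podman unshare runs a command inside your rootless user namespace,
# where in-namespace UID 0 is your own host UID. Chowning to 0:0 there
# makes the files owned by your host user (1000:1000) on the host.
podman unshare chown -R 0:0 \
  ~/.local/share/containers/storage/volumes/syncthing_data-test/_data
```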
@smallest-quark You shouldn't need to change permissions; the linuxserver Dockerfile already chmods/chowns (which they also shouldn't do).
It should work fine, but not rootless.
> @smallest-quark You shouldn't need to change permissions; the linuxserver Dockerfile already chmods/chowns (which they also shouldn't do).
> It should work fine, but not rootless.
One thing that might work is to do:

Environment=PUID=0
Environment=PGID=0

When I did that, the volume mount on my system had the correct user/group.
I am using the lscr.io/linuxserver/qbittorrent:latest container.
Thanks so much! I've tried so much and then it's so easy.
The only issue with this is that this means the user inside the container is root.
podman exec -it prowlarr id
uid=0(root) gid=0(root) groups=0(root)
> Thanks so much! I've tried so much and then it's so easy.
Glad to help!
> The only issue with this is that this means the user inside the container is root.
Well, it isn't really root, as it is a "rootless" container. It just has a UID of 0 inside the container, which actually corresponds to the user who created the container. I've been banging my head on this for a while and finally figured that out. Maybe it isn't the perfect solution, but it seems to be working.
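That behavior is easy to confirm (untested sketch; uses a throwaway bind mount and the alpine image): a file created by in-container "root" in a rootless container shows up on the host owned by the invoking user, not by real root.

```shell
# As UID 0 inside a rootless container, create a file on a bind mount:
mkdir -p /tmp/demo
podman run --rm -v /tmp/demo:/data alpine touch /data/file
# On the host the file belongs to the invoking user, not host UID 0:
ls -ln /tmp/demo/file
```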
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

With the command below I expect the created volume to be owned by user 1000, but instead it is owned by user 100000.

Steps to reproduce the issue:

1. Run the syncthing container with this command:
2. Check the owner of your syncthing_data-test volume created by the podman run command.

Describe the results you received:

The podman run command creates a volume owned by user 100000 when I use the --userns=keep-id --user=1000:1000 options.

Describe the results you expected:

I expected volumes created by podman to be owned by user 1000 while using the --userns=keep-id --user=1000:1000 options.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Output of podman info:

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No

Additional environment details (AWS, VirtualBox, physical, etc.):