MartinX3 opened 1 year ago
I agree that support for fsGroup would be great, but in the meantime you might be looking for idmapped mounts as a workaround. The volumeMounts[].mountPath, together with its volumeMounts[].name, is handled in exactly the same way as specifying it via podman run --volume and can therefore contain additional options like idmap.
apiVersion: v1
kind: Pod
metadata:
  name: idmapped-mounts-test
spec:
  containers:
  - name: idmapped-mounts-test
    image: "registry.fedoraproject.org/fedora:37"
    volumeMounts:
    - mountPath: "/etc/letsencrypt:idmap"
      name: test-mount
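For completeness: the snippet above references a volume named test-mount but omits the volumes section it would need. A minimal, hypothetical completion (the hostPath values are an illustrative assumption, not from the original comment) could look like:

```yaml
  # Hypothetical backing volume for the mount named "test-mount" above.
  # The hostPath is an illustrative assumption only.
  volumes:
  - name: test-mount
    hostPath:
      path: /etc/letsencrypt
      type: Directory
```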
Additionally, you might specify that the idmapping should be relative to the container's user namespace, in case you use automatic user namespaces. Here is an example of how to map everything with uid 0 and gid 0 on the host to uid and gid 1001 in the container:
apiVersion: v1
kind: Pod
metadata:
  name: idmapped-mounts-test
spec:
  containers:
  - name: idmapped-mounts-test
    image: "registry.fedoraproject.org/fedora:37"
    volumeMounts:
    - mountPath: "/etc/letsencrypt:idmap=uids=@0-1001-1;gids=@0-1001-1"
      name: test-mount
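Since the mountPath options are handled like the target of podman run --volume, the same relative mapping can be sketched on the plain CLI. This is an illustrative invocation (image and host path are assumptions), not one taken from the thread:

```shell
# Sketch: the same relative idmap option via podman run --volume.
# Assumes a podman version with idmapped-mount support and a kernel/
# filesystem that allows idmapped mounts; paths are illustrative.
podman run --rm \
  --volume "/etc/letsencrypt:/etc/letsencrypt:idmap=uids=@0-1001-1;gids=@0-1001-1" \
  registry.fedoraproject.org/fedora:37 ls -ln /etc/letsencrypt
```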
Unfortunately, the relative idmapping is broken in 4.4.0+, but it should be fixed in the next bugfix release (see #17517).
Thank you for the info.
I also found a way to access the files that is easier for me to use: I gave the backup pod the capability CAP_DAC_READ_SEARCH, so it can bypass the file permissions that would otherwise prevent it from reading the files. I think this way I can preserve the filesystem permissions as well.
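The capability approach described above could be sketched on the CLI like this; it is illustrative only (image, paths, and volume layout are assumptions, not from the thread):

```shell
# Sketch: give a backup container CAP_DAC_READ_SEARCH so it can read files
# regardless of their permission bits (the capability bypasses read/search
# permission checks without changing ownership on disk).
# Image and paths are illustrative assumptions.
podman run --rm \
  --cap-add=DAC_READ_SEARCH \
  --volume /srv/volumes:/backup-src:ro \
  docker.io/library/alpine ls -la /backup-src
```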
@giuseppe @umohnani8 WDYT?
Hi @MartinX3, are you requesting support for this in kube generate or kube play? Can you please give an example of the workflow you are trying to do? Example YAMLs would be very helpful.
I would really like securityContext.fsGroup support in kube play. I don't have a use case for it in kube generate at this point in time.
Here's a minimal example.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: cgr.dev/chainguard/busybox
    command: ["/bin/busybox"]
    args: ["sleep", "infinity"]
    volumeMounts:
    - name: test
      mountPath: /home/nonroot/.local/share/database.sqlite
  securityContext:
    fsGroup: 65532
  volumes:
  - name: test
    hostPath:
      path: ./database.sqlite
      type: File
$ touch database.sqlite
$ podman kube play pod.yaml
$ podman exec test-test /bin/busybox ls -la /home/nonroot
Current output:
drwx------ 1 nonroot nonroot 12 Aug 30 07:02 .
drwxr-xr-x 1 root root 14 Jan 1 1970 ..
drwxr-xr-x 1 root root 10 Aug 30 07:02 .local
Expected output:
drwx------ 1 nonroot nonroot 12 Aug 30 07:02 .
drwxr-xr-x 1 root root 14 Jan 1 1970 ..
drwxr-xr-x 1 nonroot nonroot 10 Aug 30 07:02 .local
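For context, what fsGroup does (per the Kubernetes securityContext documentation) is roughly: recursively set the volume's group to the fsGroup gid, grant the group rw access, and set the setgid bit on directories so newly created files inherit that group. A hedged shell sketch on a scratch directory, using the current user's primary group in place of gid 65532 since chgrp to an arbitrary gid needs privileges:

```shell
# Approximation of Kubernetes fsGroup handling, on a scratch directory.
# NOTE: uses the current user's primary group instead of the real fsGroup
# gid (65532), because chgrp to an arbitrary group requires privileges.
vol=$(mktemp -d)
mkdir -p "$vol/.local/share"
touch "$vol/.local/share/database.sqlite"
chgrp -R "$(id -g)" "$vol"                 # k8s would use the fsGroup gid here
chmod -R g+rwX "$vol"                      # group rw; execute only on dirs
find "$vol" -type d -exec chmod g+s {} +   # setgid: new files keep the group
ls -ld "$vol/.local"                       # directory group perms now include setgid
```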
@rhatdan @k8sorc can you assign this issue to me?
@AhmedGrati You got it.
Any updates?
Feature request description
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Recursively change the ownership of the mounted (read-only) volume, but only for the pod mounting it.
Suggest potential solution
There is none: we can't change the ownership of a mounted read-only volume to gain read access.
Have you considered any alternatives?
Mounting as R/W and changing the ownership via chown would change the ownership ID for every pod mounting the volume.
Additional context
Kubernetes supports it.
I need it to back up the volumes of several pods in a central borgmatic container.