rancher / local-path-provisioner

Dynamically provisioning persistent local storage with Kubernetes

`fsgroup` is not applied correctly to already existing content in PVCs #341

Open davinkevin opened 1 year ago

davinkevin commented 1 year ago

Hello 👋,

I'm using the local-path-provisioner as part of k3d to test and validate our development, and I discovered something strange about the local-path-provisioner's conformance to the `fsGroup` parameter.

First, all the code used in this issue is available here.

With an EKS cluster

First, I deploy an app that just writes some files to a PVC. The important settings are sketched below:
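The exact manifests live in the linked repo; this is only a sketch of the pod-level securityContext, with values inferred from the listings below:

# Hypothetical excerpt of eks/01-write (inferred from the listings
# below; the real manifests are in the linked repo).
securityContext:
  runAsUser: 1000   # new files end up owned by UID 1000
  fsGroup: 4000     # Kubernetes should make group 4000 own the volume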

$ kubectl apply -k eks/01-write/
namespace/kdavin-test-fsgroup created
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test created
deployment.apps/fsgroup-test created
$ kubectl logs fsgroup-test-dd796cfdd-87fbm -f
total 20
drwxrwsr-x    3 root     4000          4096 Jun  1 14:28 .
drwxr-xr-x    1 root     root            43 Jun  1 14:28 ..
drwxrws---    2 root     4000         16384 Jun  1 14:28 lost+found
Hello from fsgroup-test
total 24
drwxrwsr-x    3 root     4000          4096 Jun  1 14:28 .
drwxr-xr-x    1 root     root            43 Jun  1 14:28 ..
-rw-r--r--    1 1000     4000             0 Jun  1 14:28 foo
drwxrws---    2 root     4000         16384 Jun  1 14:28 lost+found
-r-xr-xr-x    1 1000     4000            18 Jun  1 14:28 test.txt
-rw-r--r--    1 1000     4000             0 Jun  1 14:28 /test/a/b/c/subfile.txt

So files are created with owner 1000 and group 4000, as requested.

Then, I redeploy the app with a different securityContext:
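Again only a sketch; the only relevant change is the fsGroup value:

# Hypothetical excerpt of eks/02-read; only fsGroup changes.
securityContext:
  runAsUser: 1000
  fsGroup: 6000   # the kubelet should re-own existing files to group 6000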

$ kubectl apply -k eks/02-read/
namespace/kdavin-test-fsgroup unchanged
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test unchanged
deployment.apps/fsgroup-test configured
$ kubectl logs fsgroup-test-77bcb759db-t7tmd
total 28
drwxrwsr-x    4 root     6000          4096 Jun  1 14:28 .
drwxr-xr-x    1 root     root            43 Jun  1 14:30 ..
drwxrwsr-x    3 1000     6000          4096 Jun  1 14:28 a
-rw-rw-r--    1 1000     6000             0 Jun  1 14:28 foo
drwxrws---    2 root     6000         16384 Jun  1 14:28 lost+found
-rwxrwxr-x    1 1000     6000            18 Jun  1 14:28 test.txt
-rw-rw-r--    1 1000     6000             0 Jun  1 14:28 /test/a/b/c/subfile.txt

We can see that 1000 is still the owner, but 6000 is now the group owner instead of 4000, following the Kubernetes fsGroup spec: the kubelet updates group ownership of existing volume content when the pod mounts it.

With k3d (presumably k3s)

I now repeat the same steps with k3d, using the same settings. First, the write step:

$ kubectl apply -k k3s/01-write/
namespace/kdavin-test-fsgroup created
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test created
deployment.apps/fsgroup-test created
$ kubectl logs fsgroup-test-79b59c9988-9jmrl
total 8
drwxrwxrwx    2 root     root          4096 Jun  1 14:33 .
drwxr-xr-x    1 root     root          4096 Jun  1 14:34 ..
Hello from fsgroup-test
total 12
drwxrwxrwx    2 root     root          4096 Jun  1 14:34 .
drwxr-xr-x    1 root     root          4096 Jun  1 14:34 ..
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 foo
-r-xr-xr-x    1 1000     4000            18 Jun  1 14:34 test.txt
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 /test/a/b/c/subfile.txt

Everything is OK, with the same values as on EKS. But if I apply the same change as before (switching fsGroup to 6000):

$ kubectl apply -k k3s/02-read/
namespace/kdavin-test-fsgroup unchanged
configmap/fsgroup-test-789h6hh8dd created
persistentvolumeclaim/fsgroup-test unchanged
deployment.apps/fsgroup-test configured
$ kubectl logs fsgroup-test-85b478c545-l7znn
total 16
drwxrwxrwx    3 root     root          4096 Jun  1 14:34 .
drwxr-xr-x    1 root     root          4096 Jun  1 14:35 ..
drwxr-xr-x    3 1000     4000          4096 Jun  1 14:34 a
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 foo
-r-xr-xr-x    1 1000     4000            18 Jun  1 14:34 test.txt
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 /test/a/b/c/subfile.txt

Files are still group-owned by 4000, whereas they should now be group-owned by 6000.

Conclusion

Is this a bug or an intended limitation of the local-path-provisioner? If it is a limitation, could we state it in the README?

At the implementation level, could we, for example, provide the fsGroup parameter to the setup script as an environment variable, to make the setup phase compatible? A sketch of the idea follows.
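To illustrate (this is only a sketch: VOL_FS_GROUP is a made-up variable name, and the provisioner does not currently export anything like it):

# Hypothetical sketch: the provisioner would export the PVC's fsGroup
# to the existing setup script, next to the VOL_DIR it already provides.
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
data:
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
    # Apply the requested fsGroup at provisioning time, if one was set.
    if [ -n "${VOL_FS_GROUP:-}" ]; then
      chgrp "$VOL_FS_GROUP" "$VOL_DIR"
      chmod g+rwxs "$VOL_DIR"   # setgid so new files inherit the group
    fi

Note this would only cover the initial provisioning; it would not re-own already existing content when a pod later mounts the volume with a different fsGroup.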

As users, can we do something to bypass this limitation in the meantime? One workaround I can imagine is an initContainer that re-owns the volume by hand, as sketched below.
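This is only a sketch of the idea (the volume name `data` is hypothetical; the mount path and group match the listings above):

# Hypothetical user-side workaround: do what fsGroup would normally do,
# from a root initContainer, before the main container starts.
initContainers:
  - name: fix-fsgroup
    image: busybox
    command: ["sh", "-c", "chgrp -R 6000 /test && chmod -R g+rwX /test"]
    securityContext:
      runAsUser: 0        # root is needed to chgrp files owned by others
    volumeMounts:
      - name: data        # hypothetical volume name
        mountPath: /test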

Additional details:

If you need any extra details, don't hesitate to ask.

/cc @tomdcc @mfredenhagen @pio-kol @gurbuzali @athkalia @skurtzemann @deepy @robmoore-i

github-actions[bot] commented 5 months ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

davinkevin commented 5 months ago

still up

github-actions[bot] commented 3 months ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

deepy commented 3 months ago

Still relevant

github-actions[bot] commented 1 month ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

davinkevin commented 1 month ago

Never been more relevant. With some help, we could implement a fix.

acalliariz commented 1 month ago

The issue is still relevant. The `securityContext.fsGroup` field is not respected. Is this a limitation of the underlying `local` or `hostPath` PV?