jan-hudec opened 6 months ago
Also looking for this, please. There's a lot of value in avoiding granting PV creation to non-admin users, like the equivalent Azure CSI driver already allows.
Just want to add that this feature would be extremely useful for me.
We have one namespace per development team on our clusters, and having a cluster-wide declared PV is a huge security issue for us. We need a way to ensure that one team can't mount an SMB share using a PV with credentials from another team. If this is already possible, then please explain how this is achieved.
Create the PVC first. When you create the PV, ensure you name it so that the PVC attaches to it. Once the PVC is attached, the PV can't be used by any other PVC.
If your cluster has features that enforce containers to run with a given UID, make use of that in the PV's mount options so that even if the wrong namespace attaches to the PV, the kernel will prevent processes inside the pod from accessing any files.
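A minimal sketch of that workaround, assuming the SMB CSI driver's usual `source` volume attribute and illustrative names (the `team-a` namespace, the `smbcreds-team-a` secret, and the share path are placeholders). The `claimRef`/`volumeName` pair is what pre-binds the PV to exactly one PVC, and the `uid`/`file_mode` mount options restrict file access to the enforced UID:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-a-smb-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  mountOptions:
    - dir_mode=0700
    - file_mode=0600
    - uid=1000          # only processes running as this UID can read the files
    - gid=1000
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: team-a-smb-share           # any unique ID for this volume
    volumeAttributes:
      source: //smb-server.example.com/team-a
    nodeStageSecretRef:
      name: smbcreds-team-a
      namespace: team-a
  claimRef:                                  # pre-bind: only this PVC can claim the PV
    name: team-a-smb-pvc
    namespace: team-a
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: team-a-smb-pvc
  namespace: team-a
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: team-a-smb-pv                  # bind to exactly that PV
  resources:
    requests:
      storage: 10Gi
```

With the empty storageClassName and the claimRef/volumeName pair pointing at each other, no other PVC can bind this PV, which is the guarantee asked about above.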
Yes, I can do that, but the teams have no access to cluster-wide resources, so they can't do it themselves... and I don't want to be a blocker for teams using SMB shares because they have to wait for me to create a PV when they ask (and when I have the time). By supporting the 'inline' method, this driver would enable a secure self-service scenario that I really think a lot of organizations would like.
But until inline volumes are supported, this is the best we can do, even if it means the teams have to wait in line (because we are not relaxing RBAC to allow any team to create/edit a PV... then all h_ll can break loose).
Is your feature request related to a problem?/Why is this needed
I have a Kubernetes application that needs to access an SMB3 share exposed by a non-Kubernetes server.
In the past I used the flexVolume driver juliohm/cifs, specified directly in the pod spec of the deployment. But that driver has disappeared and cannot easily be installed any more, so I am looking to replace it with a CSI one. Unfortunately, inline volumes are not supported, and creating a PV for this feels wrong, because the PV is not managed by the cluster. Instead, it is a specific set of mount parameters to be used by that app and that app only.
Describe the solution you'd like in detail
Support for specifying the SMB volume inline, as a CSI ephemeral volume, directly in the pod spec.
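Roughly something like the following sketch of the requested (not currently supported) usage. It reuses the generic CSI ephemeral volume fields that already exist in the pod spec; the source attribute and the smbcreds secret name mirror what the driver uses for PVs and are assumptions here:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-inline-smb
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: smb-share
          mountPath: /mnt/share
  volumes:
    - name: smb-share
      csi:                                   # CSI ephemeral (inline) volume
        driver: smb.csi.k8s.io
        volumeAttributes:
          source: //smb-server.example.com/share
        nodePublishSecretRef:
          name: smbcreds                     # secret in the pod's own namespace
```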
Describe alternatives you've considered
I can create a PV and PVC for that volume, but as described above, that means creating a cluster-scoped object for mount parameters that concern this one application only.
I can make the pod privileged and simply run a CIFS mount inside (a sketch follows below).
I could even modify the application to use a userland CIFS/SMB3 library.
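For reference, the privileged-pod alternative would look roughly like this sketch. It assumes an image with cifs-utils (mount.cifs) installed, the cifs kernel module available on the node, and placeholder server, secret, and entrypoint names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-privileged-mount
spec:
  containers:
    - name: app
      image: my-app-image            # placeholder; must include cifs-utils
      securityContext:
        privileged: true             # required so mount(2) is allowed inside the container
      command:
        - sh
        - -c
        - |
          mkdir -p /mnt/share
          mount -t cifs //smb-server.example.com/share /mnt/share \
            -o username=$SMB_USER,password=$SMB_PASS,vers=3.0
          exec my-app                # placeholder for the real entrypoint
      env:
        - name: SMB_USER
          valueFrom: { secretKeyRef: { name: smbcreds, key: username } }
        - name: SMB_PASS
          valueFrom: { secretKeyRef: { name: smbcreds, key: password } }
```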
Additional context
I've seen some talk about this not being implemented for NFS for security reasons, but because of the last option I don't believe there actually are security reasons here. The application operator can access the share anyway if they have the credentials, and mounting it doesn't eat more resources than accessing it in some other way.