**Closed** — Mr-Howard-Roark closed this issue 4 years ago
Ok, so actually I do see a log showing that the volume is successfully mounted on the node (/var/log/smb-driver.log), and I can see that the files are there at /var/lib/kubelet/pods/id/volumes/microsoft.com~smb/cifs-vol-created-using-flexvol-plugin-pv.
But then I guess it can't be mounted into a container that runs with a non-root fsGroup ID?
Well, after doing more research, it turns out that Kubernetes uses chown to change the ownership of every directory and file on the volume to reflect the fsGroup. On a large volume that takes a long time and is probably the reason for the timeouts. I'll close this issue.
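To make the cost concrete, here's a rough sketch of what kubelet's fsGroup handling amounts to (illustrative Python, not the actual kubelet code; the function name and GID are made up). The key point is that every single entry needs its own chown/chmod call, and on a network filesystem like CIFS each call is a round trip:

```python
import os
import stat


def apply_fsgroup(root: str, gid: int, dry_run: bool = True) -> int:
    """Mimic kubelet's fsGroup behavior: walk the whole volume and
    chown/chmod every entry to the pod's fsGroup. On a CIFS mount each
    syscall becomes a network round trip, so a volume with many files
    easily blows past the mount timeout. Returns the number of entries
    that would be touched."""
    touched = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            touched += 1
            if not dry_run:
                os.chown(path, -1, gid)  # -1 leaves the owner unchanged
                mode = os.stat(path).st_mode
                # give the group read/write (and execute where applicable)
                os.chmod(path, mode | stat.S_IRGRP | stat.S_IWGRP)
    return touched
```

Running it with `dry_run=True` against a mounted share gives a quick count of how many per-file operations kubelet would have to issue.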
@Mr-Howard-Roark I have found you in multiple GH issues regarding this. Do you believe we should add fsGroup: false to the init response to prevent this issue? I believe so, but I wanted to hear what your final experience with the plugin was.
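If I'm reading the flexvolume capability fields right, that would mean the driver's init call returning something like the following (a sketch, assuming the capability key is spelled `fsGroup`), which tells kubelet not to attempt ownership management on the volume:

```json
{
  "status": "Success",
  "capabilities": {
    "attach": false,
    "fsGroup": false
  }
}
```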
Ya, my small change to this plugin worked fine: https://github.com/Azure/kubernetes-volume-drivers/pull/84, and your change looks fine too. I think it is necessary to do this, because I can't think of a case where you'd want kubelet to try to chown every file on a CIFS volume, and I don't know that it would ever succeed.
This plugin has been working great for months, up until today when I tried to include a securityContext in the spec of a Pod that uses a PVC bound to a PV configured with this plugin. Here is an example of what doesn't work (written freehand to give the gist of it).
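Roughly like this (all names, images, and IDs here are placeholders, not my actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: smb-test
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000      # adding this is what breaks the mount
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: smb-vol
          mountPath: /data
  volumes:
    - name: smb-vol
      persistentVolumeClaim:
        claimName: cifs-pvc   # bound to a PV backed by this flexvolume plugin
```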
When I don't include the securityContext, everything works great as usual. With it, the pod gets stuck in a ContainerCreating state until the volume mount times out. There are no meaningful logs in kubelet, Docker, or the cluster logs. Does anyone have advice on what is happening, or on how to see what is happening?