Closed. @ankursheth closed this issue 5 years ago.
I am facing a similar issue where the smb share is not unmounted.
I was able to debug the issue; it had to do with the wait time for pod deletion and the Kubernetes cluster setup. Closing this issue as it's resolved. Thanks @fstab for your comments and suggestions, they helped resolve the issue.
I have this issue - pods using the cifs plugin all get stuck "terminating"
the only information in the log is:
unmount /var/lib/kubelet/pods/1e577dbc-c434-11e9-aecd-0050569d4986/volumes/fstab~cifs/cifs
and this repeats EVERY couple of seconds... so far without end in sight
manually unmounting - i.e.:
$VOLUME_PLUGIN_DIR/fstab~cifs/cifs unmount /var/lib/kubelet/pods/618f50ba-c5ba-11e9-8b55-0050569d4986/volumes/fstab~cifs/cifs
results in:
{"status":"Failure","message":"cifs unmount: no filesystem mounted under directory: '/var/lib/kubelet/pods/618f50ba-c5ba-11e9-8b55-0050569d4986/volumes/fstab~cifs/cifs'"}
@ankursheth what was the fix?
For me, according to the kubelet logs, it got stuck trying to unmount an old mount point that had been unmounted manually, and it failed repeatedly in a retry loop. I think the problem is that doUnmount() should not treat unmounting a non-existent mount point as a failure. For example, Microsoft's cifs plugin returns Success when the mount doesn't exist.
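A minimal sketch of that suggestion, assuming the plugin's unmount path is a shell function that receives the mount path as its first argument (function and variable names here are illustrative, not necessarily the plugin's actual code):

doUnmount() {
    local MNTPATH="$1"
    # If nothing is mounted there, treat the unmount as a no-op and
    # report Success, so kubelet does not get stuck retrying forever.
    if ! mountpoint -q "${MNTPATH}"; then
        echo '{"status": "Success"}'
        exit 0
    fi
    umount "${MNTPATH}" || { echo '{"status": "Failure", "message": "cifs unmount failed"}'; exit 1; }
    echo '{"status": "Success"}'
    exit 0
}

That would match the behaviour described above for Microsoft's cifs plugin: unmounting a path that is not mounted is reported as Success rather than Failure.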
Has anyone found a solution?
The best way to debug this is to uncomment these lines at the top of the script:
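The debug block near the top of the script looks roughly like this (a sketch; the exact lines may differ between versions of the script). Each invocation appends a timestamp and the full argument list to /tmp/cifs.log:

# echo >> /tmp/cifs.log
# date >> /tmp/cifs.log
# echo "$@" >> /tmp/cifs.log

Remove the leading # from those lines to enable the logging.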
Then you should find a file
/tmp/cifs.log
that shows how the script was called. If the script is called with the parameter unmount,
there might be an issue with the script, but if there is no call with the parameter unmount,
then this might be a Kubernetes issue unrelated to this script.
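A quick way to watch the calls while reproducing the stuck pod (standard tools, nothing specific to this plugin):

tail -f /tmp/cifs.log

or, after the fact:

grep -c unmount /tmp/cifs.log

If the count keeps growing, kubelet is calling the plugin's unmount over and over; if it stays at zero, the retry loop is happening inside kubelet without ever reaching the script.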