jiuchen1986 opened 2 years ago
Got the same issue, with iSCSI backend storage. k8s attempts the unmount only once, and when it times out it just forgets about it. k8s version is 1.21. @jiuchen1986 did you solve this problem?
This sounds more like a problem or inconvenience in the k8s behaviour. I'm not sure if k8s does a lazy unmount when the pod goes away, or if there is a way to specify that. Taking a look...
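For reference, a lazy unmount can be triggered manually on the affected node. Below is a minimal sketch; the function name and the mountpoint path are hypothetical (the real path is whatever `mount` shows for the leftover NFS volume):

```shell
# Sketch: lazily unmount a stale NFS mountpoint, assuming it is the
# path reported by `mount` on the affected worker node.
cleanup_stale_nfs() {
    mountpoint="$1"
    if grep -qs " ${mountpoint} " /proc/mounts; then
        # -l (lazy) detaches the mount from the namespace immediately and
        # cleans up references once it is no longer busy, so hung NFS IO
        # does not block the unmount itself.
        umount -l "${mountpoint}" && echo "lazily unmounted: ${mountpoint}"
    else
        echo "not mounted: ${mountpoint}"
    fi
}
```

Usage would be something like `cleanup_stale_nfs /var/lib/kubelet/pods/<pod_uid>/volumes/kubernetes.io~nfs/<pv_name>` (run as root). This only hides the symptom; whether kubelet should do this itself is the open question.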
Describe the bug: Sometimes when a Pod mounted with an NFS PV is removed together with the corresponding NFS PVC/PV, both the Pod/PVC/PV and the backend NFS Deployment/Service/PVC/PV are cleaned up so fast that the kubelet on the worker node where the Pod used to run cannot unmount the NFS volume in time. This leaves the NFS volume on the worker node stale; it will not be unmounted unless done manually, and any IO process touching it will block forever until the node is rebooted.
It's weird, though, that the Pod object is successfully removed from the cluster even though kubelet has not finished cleaning up the mount on the node.
Expected behaviour: The NFS volume mounted on the worker node is cleaned up.
Steps to reproduce the bug:
1. Set `terminationGracePeriodSeconds` to 0 so the Pod can be quickly removed when deleting it.
2. Run `kubectl get po -o wide` to get the node where the Pod is running.
3. Run `kubectl delete -f <path_file_of_above_content>`; the Pod in `kubectl`'s view will be successfully removed.
4. On the node, run `df -h`, which will get stuck. Then via `mount` you will see the NFS volume is left over.

The output of the following commands will help us better understand what's going on:
- `kubectl get pods -n <openebs_namespace> --show-labels`
- `kubectl get pvc -n <openebs_namespace>`
- `kubectl get pvc -n <application_namespace>`
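The leftover mount from step 4 can also be spotted without hanging the shell: `df -h` blocks because it stats the stale NFS filesystem, but reading `/proc/mounts` does not touch the filesystem itself. A sketch, where the function name and the kubelet directory prefix are assumptions:

```shell
# Sketch: list NFS mounts under a directory prefix by scanning /proc/mounts,
# which is safe even when the NFS server is gone (unlike `df -h`).
# /proc/mounts fields: device mountpoint fstype options dump pass.
find_leftover_nfs() {
    prefix="${1:-/var/lib/kubelet}"
    awk -v p="$prefix" '$3 ~ /^nfs/ && index($2, p) == 1 { print $2 }' /proc/mounts
}
```

Any path printed after the Pod object is already gone from `kubectl`'s view is a candidate stale mount.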
Anything else we need to know?:
Environment details:
- OpenEBS version (use `kubectl get po -n openebs --show-labels`): v0.9.0
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Others: The backend storage is Ceph CSI RBD.
- StorageClass: