openebs / lvm-localpv

Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that are integrated with a backend LVM2 data storage stack.
Apache License 2.0

Error bind to existing PV: verifyMount: device already mounted #180

Closed: duclm2609 closed this issue 3 months ago

duclm2609 commented 2 years ago

Hi guys, I'm running a StatefulSet that uses lvm-localpv as its StorageClass. Recently there was a requirement to expand the volume size, but since the StatefulSet cannot update the PV size directly, I followed these steps (sketched as commands after the list) to increase the PV size:

  1. Delete the StatefulSet (with the option --cascade=orphan).
  2. Delete the pod.
  3. Edit the PVC to increase its storage capacity. The PV and the PVC expand as expected.
  4. Delete the PVC and remove the claimRef from the PV so it returns to Available status.
  5. Re-deploy the StatefulSet.
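For reference, here is a rough sketch of those steps as kubectl commands. The StatefulSet, PVC, and manifest names (my-sts, data-my-sts-0, my-sts.yaml) are placeholders, not names from this issue:

```shell
# 1. Delete the StatefulSet but keep its pods running (placeholder names throughout)
kubectl delete statefulset my-sts --cascade=orphan

# 2. Delete the pod that owns the volume
kubectl delete pod my-sts-0

# 3. Grow the PVC (requires allowVolumeExpansion: true on the StorageClass)
kubectl patch pvc data-my-sts-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# 4. Delete the PVC and clear the claimRef so the PV becomes Available again
#    (assumes the PV reclaim policy is Retain, otherwise the PV itself would be removed)
kubectl delete pvc data-my-sts-0
kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'

# 5. Re-apply the StatefulSet manifest
kubectl apply -f my-sts.yaml
```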

The StatefulSet successfully creates the PVC and binds it to the expected PV, but the pod is stuck starting. Checking the event logs, I see the following error:

Warning FailedMount 88s (x19 over 24m) kubelet MountVolume.SetUp failed for volume "pvc-97fd3b2a-2482-449a-ac08-7109f8558c2f" : rpc error: code = Internal desc = verifyMount: device already mounted at [/var/lib/kubelet/pods/271055b9-4f6a-4a2d-9ebe-096f26cd81e3/volumes/kubernetes.io~csi/pvc-97fd3b2a-2482-449a-ac08-7109f8558c2f/mount]
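One way to check whether that mount is actually still present is to inspect the node directly. This is only a diagnostic sketch; the PVC name is the one from the event above:

```shell
# On the node hosting the pod: look for a lingering mount of this PVC
findmnt | grep pvc-97fd3b2a-2482-449a-ac08-7109f8558c2f

# List the LVM logical volumes backing lvm-localpv volumes on this node
lvs
```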

What can I do now to resolve the problem without the risk of losing data? Any help is appreciated. Thank you.

pawanpraka1 commented 2 years ago

@duclm2609 You should not need to re-deploy the StatefulSet. Can you try resizing just the PVC (step 3 only)? That should work for your use case.
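For context, online expansion normally needs only the PVC edit. A minimal sketch, assuming the StorageClass was created with allowVolumeExpansion: true; the StorageClass and PVC names below are placeholders:

```shell
# Confirm the StorageClass allows expansion (placeholder StorageClass name)
kubectl get sc openebs-lvmpv -o jsonpath='{.allowVolumeExpansion}'

# Grow the PVC in place; the pod and StatefulSet can stay as they are
kubectl patch pvc data-my-sts-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the resize progress
kubectl get pvc data-my-sts-0 -w
```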

dsharma-dc commented 3 months ago

I'm not sure if this is still a problem. Please provide the logs if it still affects you. The PVC seems to have not been unmounted before being mounted again. However, as mentioned in the previous comment, the resize should simply work without most of the StatefulSet-related steps listed above.
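If the problem does recur, the node-side CSI plugin logs are the ones that report the verifyMount error. A sketch of collecting them, assuming a default install where the node plugin DaemonSet runs in the openebs namespace (pod and container names may differ per install):

```shell
# Find the lvm-localpv node plugin pod on the affected node (names are assumptions)
kubectl get pods -n openebs -o wide | grep lvm-node

# Tail the CSI plugin container logs from that pod
kubectl logs -n openebs <openebs-lvm-node-pod> -c openebs-lvm-plugin --tail=200
```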

dsharma-dc commented 3 months ago

I'll go ahead and close this for now. Please reopen if it is still an issue, and provide the provisioner logs.