tarelda opened 1 month ago
Hi @tarelda , I do see one lvmvolume CR being present when the LVM node plugin started. Here is the add event for the existing/already provisioned volume. Unpublish calls were also being received for the other lvmvolumes, for which no similar add events were present, meaning those three lvmvolume CRs themselves did not exist.
```
I0511 08:15:52.611443 1 volume.go:55] Getting lvmvol object name:pvc-9ad5e4fb-5dc0-4c94-80e9-7f25a3c57627, ns:openebs from cache
```
But we see a delete event for the same volume later in the log:
```
I0511 08:19:51.248012 1 volume.go:103] Got update event for deleted Vol pvc-9ad5e4fb-5dc0-4c94-80e9-7f25a3c57627, Deletion timestamp 2024-05-11 08:19:51 +0000 UTC
```
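To cross-check which lvmvolume CRs actually exist on the cluster, a minimal sketch (using the `lvmvol` short name the plugin logs refer to):

```sh
# List all lvmvolume CRs in the openebs namespace; volumes that show up
# only in NodeUnpublish calls but not here have already been deleted.
kubectl get lvmvol -n openebs

# Inspect one CR, including its deletion timestamp and finalizers
kubectl get lvmvol pvc-9ad5e4fb-5dc0-4c94-80e9-7f25a3c57627 -n openebs -o yaml
```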
Few questions:

- You mentioned pods from the deployment were in `Terminating` state. Which deployment do you mean? Were they referencing the volumes for which we are getting Unmount calls?
- Can you send us the output of `kubectl get pvc -oyaml` and `kubectl get pv -oyaml` for the LVM-specific PVCs/PVs? (See the sketch after this list.)
- Were there any lvmvolume CRs manually deleted? I'm curious about pvc-243beed7-5ed6-439b-b7ac-fc60ac131f65 and pvc-031e4187-5410-4044-8ec1-ae313cf47329.
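A sketch of how that output could be collected (the driver name `local.csi.openebs.io` is assumed here as the lvm-localpv default; adjust to whatever your StorageClass uses):

```sh
# Dump the two PVs asked about above
kubectl get pv pvc-243beed7-5ed6-439b-b7ac-fc60ac131f65 -o yaml
kubectl get pv pvc-031e4187-5410-4044-8ec1-ae313cf47329 -o yaml

# Or list every PV backed by the LVM CSI driver
kubectl get pv -o json \
  | jq '.items[] | select(.spec.csi.driver == "local.csi.openebs.io") | .metadata.name'
```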
I made a deployment for an app that requested a persistent volume through a PVC with a StorageClass handled by the LVM plugin. Then I deleted it, because the mounts weren't being made in the pods. After that the pods were stuck in Terminating state and the volumes were not deleted. Then I went to town and deleted everything manually (including PVCs). I did this a few times, hence the multiple instances of volumes to be unmounted in the logs. As I recall, this behaviour persisted even through an OpenEBS redeployment with Helm.
Small clarification - by manual deletion of a PVC I mean deleting it through `kubectl delete pvc` and then also deleting the mount dir, for example /var/snap/microk8s/common/var/lib/kubelet/pods/f746b1c2-babc-49c2-8aeb-177e3d58f61c/volumes/kubernetes.io~csi/pvc-031e4187-5410-4044-8ec1-ae313cf47329. This was obviously done after deleting the pod f746b1c2-babc-49c2-8aeb-177e3d58f61c.
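A sketch of that manual cleanup sequence, using the pod UID and PV name from this thread (`<claim-name>` is a placeholder for the actual PVC name):

```sh
# Delete the PVC; normally this also triggers deletion of the PV and lvmvolume CR
kubectl delete pvc <claim-name>

# Remove the leftover CSI mount directory under the microk8s kubelet root
rm -rf /var/snap/microk8s/common/var/lib/kubelet/pods/f746b1c2-babc-49c2-8aeb-177e3d58f61c/volumes/kubernetes.io~csi/pvc-031e4187-5410-4044-8ec1-ae313cf47329
```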
What is strange: a few days later I finally figured out that when I was installing OpenEBS I hadn't corrected the kubelet dir paths in values.yml to match microK8s. Since fixing that, the logs finally cleaned up and volumes started to be mounted correctly. But I don't understand why the paths in the openebs-lvm-localpv-node pod logs were already for the correct kubelet directory.
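For anyone hitting the same thing on microk8s, a minimal sketch of the fix, assuming the OpenEBS umbrella Helm chart; the exact value key (`lvm-localpv.lvmNode.kubeletDir` here) may differ between chart versions, so check the chart's values.yaml:

```sh
# Point the LVM node plugin at the microk8s kubelet root
# instead of the default /var/lib/kubelet
helm upgrade --install openebs openebs/openebs -n openebs \
  --set lvm-localpv.lvmNode.kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
```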
@tarelda , Happy to know that everything's fine now.
Without setting the correct kubelet mount path for microk8s, the path never got mounted on the pod. I'm guessing that in the unmount workflow kubelet knows it is running on microk8s, so it supplies the correct path in the NodeUnpublishVolumeRequest while the pod was stuck in `Terminating` state.
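For context, the target path kubelet passes in those requests follows a fixed layout under its own root dir, which is why the microk8s path shows up in the unmount calls. Pattern reconstructed from the path quoted earlier in this thread (the trailing `mount` subdir is an assumption based on standard kubelet CSI layout):

```sh
# <kubelet-root>/pods/<pod-UID>/volumes/kubernetes.io~csi/<pv-name>/mount
# e.g. with the microk8s kubelet root:
ls /var/snap/microk8s/common/var/lib/kubelet/pods/f746b1c2-babc-49c2-8aeb-177e3d58f61c/volumes/kubernetes.io~csi/pvc-031e4187-5410-4044-8ec1-ae313cf47329/mount
```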
Question: NodePublishVolume never succeeded, right? Do we have logs specific to this in the issue? I wanted to check the `target_path` in the request.
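A quick way to pull those entries out of the node plugin log, if anyone wants to check (a sketch; the exact log wording depends on the plugin version):

```sh
kubectl logs openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin \
  | grep -iE 'NodePublishVolume|target_path'
```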
What steps did you take and what happened: I created a simple registry deployment with one claim. Unfortunately, for some reason it is not getting mounted, and after I deleted the deployment and PVC the kubelet logs still show that it is trying to unmount it. Also, the pods from the deployment had to be manually deleted, because they were stuck in terminating state. Probably because they wrote to the mountpoint (this was in the logs before, but I manually cleaned up the mountpoint directory).
What did you expect to happen: I expected to have a clean environment to start over again. I don't know why it is still trying to unmount nonexistent volumes.
The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)

- `kubectl logs -f openebs-lvm-localpv-controller-7b6d6b4665-fk78q -n openebs -c openebs-lvm-plugin`
- `kubectl logs -f openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin` - I included only the repeating part here, but the full log is here.
- `kubectl get pods -n openebs`
- `kubectl get lvmvol -A -o yaml`
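A sketch for capturing all of that in one go for a gist (file names are arbitrary; `[xxxx]` is the node pod suffix placeholder from above):

```sh
kubectl logs openebs-lvm-localpv-controller-7b6d6b4665-fk78q -n openebs -c openebs-lvm-plugin > controller.log
kubectl logs openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin > node.log
kubectl get pods -n openebs > pods.txt
kubectl get lvmvol -A -o yaml > lvmvol.yaml
```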
Anything else you would like to add: I installed OpenEBS directly through Helm to get version 4.0.1 instead of the microk8s default 3.10 that is installed through the addon.
Environment:

- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`): Ubuntu 22.04.4 LTS