kubernetes-sigs / vsphere-csi-driver

vSphere storage Container Storage Interface (CSI) plugin
https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/index.html
Apache License 2.0

FailedMount - FailedPrecondition - Volume does not appear staged #341

Closed mohideen closed 1 year ago

mohideen commented 4 years ago

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: While deleting and (re-)applying the Kubernetes StatefulSet configuration, some of the pods that use vSphere CSI based persistent volumes fail to start and are stuck in the ContainerCreating state. This happens randomly to some pods. Other pods that also use vSphere CSI based PVs and are scheduled on the same worker node start without any issues.

Currently, we have 2 pods in this failed state. Here is the link to the Gist containing related k8s objects describe output and vSphere CSI logs: https://gist.github.com/mohideen/3b0cfa29f03d10fb7e1170f5729cb528

What you expected to happen: The pods should start successfully with the corresponding PVs mounted to them.

How to reproduce it (as minimally and precisely as possible):

  1. Create a PVC using a vSphere storage class.
  2. Create a StatefulSet and configure it to use the above PVC (a minimal manifest sketch is included below).
  3. Verify that the pods have their volumes mounted.
  4. Delete the StatefulSet.
  5. Redeploy the same StatefulSet.
  6. Check the status of the pods.
    • The pod usually starts correctly.
    • On some random occasions, the pod fails to start and is stuck in the ContainerCreating state with the message that MountVolume.SetUp failed for the volume. (Note: this has happened only three times so far in the last month.)
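
For reference, a minimal repro along these lines (the names and storage class below are placeholders, not our exact manifests):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-sts                    # placeholder name
spec:
  serviceName: example-sts
  replicas: 1
  selector:
    matchLabels:
      app: example-sts
  template:
    metadata:
      labels:
        app: example-sts
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:                # PVC backed by the vSphere CSI storage class
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: vsphere-sc     # placeholder storage class name
      resources:
        requests:
          storage: 1Gi
EOF

# delete and re-apply the StatefulSet (the PVC/PV are retained), then watch the pod
kubectl delete statefulset example-sts
# re-run the same `kubectl apply` as above
kubectl get pods -w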

Anything else we need to know?:

Environment:

RaunakShah commented 4 years ago

Hmm, haven't seen this one before. Looking at your vSphere CSI node logs, I see this:

time="2020-08-31T14:07:17Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 1640829: rpc error: code = FailedPrecondition desc = Volume ID: \"786323a5-58be-49d9-b532-879125a042bd\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-906fbbe7-029f-4b71-9da1-f5d7fee0a051/globalmount\""

Can you look at your node daemonset logs for this volume ID 786323a5-58be-49d9-b532-879125a042bd beginning with NodeStageVolume ...?
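
Something along these lines should pull the relevant entries from the node daemonset pod on that worker (the namespace, pod, and container names may differ in your deployment):

kubectl -n kube-system logs <vsphere-csi-node-pod-on-that-worker> -c vsphere-csi-node \
  | grep -E 'NodeStageVolume|786323a5-58be-49d9-b532-879125a042bd'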

mohideen commented 4 years ago

The application stack consists of three StatefulSets with a vSphere CSI PV each. When I attempted to delete and redeploy the app stack, I only saw the NodeStageVolume message for two of the three volumes. There was no NodeStageVolume call for volume 786323a5-58be-49d9-b532-879125a042bd, which is the one currently failing to mount.

time="2020-09-01T15:11:02Z" level=debug msg="/csi.v1.Node/NodeStageVolume: REQ 740988: VolumeId=29123695-dfed-49ab-a77e-527d360bb06a, PublishContext=map[diskUUID:6000c2968891f512e0177da95e7c3d3b type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0ea7be23-d53c-4eb3-bfb3-6c9af1363fbc/globalmount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1596082748155-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
{"level":"info","time":"2020-09-01T15:11:02.374899422Z","caller":"service/node.go:96","msg":"NodeStageVolume: called with args {VolumeId:29123695-dfed-49ab-a77e-527d360bb06a PublishContext:map[diskUUID:6000c2968891f512e0177da95e7c3d3b type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0ea7be23-d53c-4eb3-bfb3-6c9af1363fbc/globalmount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1596082748155-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"7c65944c-a9c0-406d-b12b-dcd745b57c74"}
{"level":"info","time":"2020-09-01T15:11:02.374968044Z","caller":"service/node.go:145","msg":"NodeStageVolume: volID \"29123695-dfed-49ab-a77e-527d360bb06a\", published context map[diskUUID:6000c2968891f512e0177da95e7c3d3b type:vSphere CNS Block Volume], diskID \"6000c2968891f512e0177da95e7c3d3b\"","TraceId":"7c65944c-a9c0-406d-b12b-dcd745b57c74"}
time="2020-09-01T15:11:02Z" level=debug msg="/csi.v1.Node/NodeStageVolume: REP 740988: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
time="2020-09-01T15:11:07Z" level=debug msg="/csi.v1.Node/NodeStageVolume: REQ 2102610: VolumeId=dcc8129e-2646-4bf7-9cfe-a5758fcc2164, PublishContext=map[diskUUID:6000c29c86ac6f2c1f39d66d7d4758db type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-196a6253-3cc6-4576-a38a-fe11acb3f56c/globalmount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1596082748155-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
{"level":"info","time":"2020-09-01T15:11:07.09063662Z","caller":"service/node.go:96","msg":"NodeStageVolume: called with args {VolumeId:dcc8129e-2646-4bf7-9cfe-a5758fcc2164 PublishContext:map[diskUUID:6000c29c86ac6f2c1f39d66d7d4758db type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-196a6253-3cc6-4576-a38a-fe11acb3f56c/globalmount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1596082748155-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"f22d6bbb-248e-42d5-b53d-b4fb6f21d9bf"}
{"level":"info","time":"2020-09-01T15:11:07.090703654Z","caller":"service/node.go:145","msg":"NodeStageVolume: volID \"dcc8129e-2646-4bf7-9cfe-a5758fcc2164\", published context map[diskUUID:6000c29c86ac6f2c1f39d66d7d4758db type:vSphere CNS Block Volume], diskID \"6000c29c86ac6f2c1f39d66d7d4758db\"","TraceId":"f22d6bbb-248e-42d5-b53d-b4fb6f21d9bf"}
time="2020-09-01T15:11:07Z" level=debug msg="/csi.v1.Node/NodeStageVolume: REP 2102610: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"

But I did see all three volumeattachment objects being deleted and re-created as I deleted and redeployed the corresponding StatefulSets.

mohideen commented 4 years ago

I looked at the kubelet logs and found that there were orphaned pods that were still holding onto the volumes.

Sep 01 16:28:51 worker1 kubelet[5372]: E0901 16:28:51.447939    5372 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^e1119998-d9fa-4fe1-bb0a-0493ece2d0b7 podName: nodeName:}" failed. No retries permitted until 2020-09-01 16:28:51.947908228 -0400 EDT m=+46.771735774 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"pvc-54593233-12ba-4fae-8b08-e6b87be0b3b6\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^e1119998-d9fa-4fe1-bb0a-0493ece2d0b7\") pod \"drupal-staff-blog-db-0\" (UID: \"38772fac-91f3-40d8-b3d9-46ca1777b08c\") : rpc error: code = FailedPrecondition desc = Volume ID: \"e1119998-d9fa-4fe1-bb0a-0493ece2d0b7\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-54593233-12ba-4fae-8b08-e6b87be0b3b6/globalmount\""
Sep 01 16:28:51 worker1 kubelet[5372]: I0901 16:28:51.518533    5372 reconciler.go:196] operationExecutor.UnmountVolume started for volume "pvc-54593233-12ba-4fae-8b08-e6b87be0b3b6" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^e1119998-d9fa-4fe1-bb0a-0493ece2d0b7") pod "6bca1939-78b7-45e0-bf1c-4fbfe47d7bb5" (UID: "6bca1939-78b7-45e0-bf1c-4fbfe47d7bb5")
Sep 01 16:28:51 worker1 kubelet[5372]: I0901 16:28:51.518687    5372 clientconn.go:106] parsed scheme: ""
Sep 01 16:28:51 worker1 kubelet[5372]: I0901 16:28:51.518702    5372 clientconn.go:106] scheme "" not registered, fallback to default scheme
Sep 01 16:28:51 worker1 kubelet[5372]: I0901 16:28:51.518752    5372 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  <nil> 0 <nil>}] <nil> <nil>}
Sep 01 16:28:51 worker1 kubelet[5372]: I0901 16:28:51.518761    5372 clientconn.go:933] ClientConn switching balancer to "pick_first"
Sep 01 16:28:51 worker1 kubelet[5372]: I0901 16:28:51.518795    5372 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
Sep 01 16:28:51 worker1 kubelet[5372]: E0901 16:28:51.527040    5372 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^e1119998-d9fa-4fe1-bb0a-0493ece2d0b7 podName:6bca1939-78b7-45e0-bf1c-4fbfe47d7bb5 nodeName:}" failed. No retries permitted until 2020-09-01 16:28:52.027008304 -0400 EDT m=+46.850835874 (durationBeforeRetry 500ms). Error: "UnmountVolume.TearDown failed for volume \"pvc-54593233-12ba-4fae-8b08-e6b87be0b3b6\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^e1119998-d9fa-4fe1-bb0a-0493ece2d0b7\") pod \"6bca1939-78b7-45e0-bf1c-4fbfe47d7bb5\" (UID: \"6bca1939-78b7-45e0-bf1c-4fbfe47d7bb5\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = error getting block device for volume: e1119998-d9fa-4fe1-bb0a-0493ece2d0b7, err: lstat /dev/sdf: no such file or directory"

I unmounted the volume from the orphaned pod directory (e.g. /var/lib/kubelet/pods/abd54204-56d7-4b5d-b1b1-994a81ee44aa/volumes/kubernetes.io~csi/pvc-906fbbe7-029f-4b71-9da1-f5d7fee0a051/mount), deleted the directory associated with the non-existent pod (e.g. /var/lib/kubelet/pods/abd54204-56d7-4b5d-b1b1-994a81ee44aa), and restarted the corresponding k8s node. When the node came back online, the volume was mounted correctly to the necessary pod.
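
In concrete terms, the manual cleanup on the worker was roughly the following (the paths are the example ones from above and will differ per node and pod):

# run on the affected worker node as root
umount /var/lib/kubelet/pods/abd54204-56d7-4b5d-b1b1-994a81ee44aa/volumes/kubernetes.io~csi/pvc-906fbbe7-029f-4b71-9da1-f5d7fee0a051/mount
rm -rf /var/lib/kubelet/pods/abd54204-56d7-4b5d-b1b1-994a81ee44aa   # directory of the orphaned (non-existent) pod
reboot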

Probably related to https://github.com/kubernetes/kubernetes/issues/60987

The pods were functioning properly with their PVs mounted when we deleted the associated StatefulSet for a regular application version update. When we deployed the StatefulSet again with the new application configuration changes, the pod failed to start with the PV error.

Do you know what could have caused this situation?

RaunakShah commented 4 years ago

@mohideen looks like kubelet/kube-controller-manager has an issue. I'm not too sure what the bug is, but CSI depends on kubelet to call the NodeStageVolume and NodePublishVolume callbacks. I think the only solution for now is to do what you've done. If you can repro the issue consistently, I'd suggest increasing the kubelet log level and capturing the logs.
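
For example, something along these lines on the affected worker (the exact kubelet config location varies by distro and kubeadm setup, so treat this as a sketch):

# raise kubelet verbosity (kubeadm-style hosts often read extra flags from /etc/default/kubelet)
echo 'KUBELET_EXTRA_ARGS=--v=4' | sudo tee -a /etc/default/kubelet
sudo systemctl restart kubelet

# then capture the relevant entries around the next failure
journalctl -u kubelet --since "1 hour ago" | grep -E 'MountVolume|NodeStageVolume|<volume-id>'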

gohilankit commented 3 years ago

/assign @RaunakShah

RaunakShah commented 3 years ago

@mohideen have you been able to reproduce this issue?

Anil-YadavK8s commented 3 years ago

@RaunakShah We are getting the same issue.

When the pods get deleted, the PVCs are not mounted back to the pods. It gives the error below.

Events:
  Type     Reason       Age               From                         Message
  ----     ------       ----              ----                         -------
  Normal   Scheduled                      default-scheduler            Successfully assigned af-system/maf-dropl-main-af-0 to seliicbl01483-k8sw
  Warning  FailedMount  3s (x7 over 35s)  kubelet, seliicbl01483-k8sw  MountVolume.SetUp failed for volume "pvc-50377f98-7718-4da4-9ea1-d17c1f25be17" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = FailedPrecondition desc = Volume ID: "88f42fd9-1ff8-4b54-81af-aeb042070d6d" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-50377f98-7718-4da4-9ea1-d17c1f25be17/globalmount"
  Warning  FailedMount  1s (x8 over 39s)  kubelet, seliicbl01483-k8sw  MountVolume.SetUp failed for volume "pvc-1fa045f1-bb24-42ed-a6c1-113824db6de2" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = FailedPrecondition desc = Volume ID: "36c67954-827d-45ae-ad1f-8c9248156a03" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fa045f1-bb24-42ed-a6c1-113824db6de2/globalmount"
  Warning  FailedMount  0s (x7 over 32s)  kubelet, seliicbl01483-k8sw  MountVolume.SetUp failed for volume "pvc-d749637b-4a2d-4430-ba03-be56f03abdc6" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = FailedPrecondition desc = Volume ID: "73675840-6547-42fb-a139-09bbb0cc3c1a" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d749637b-4a2d-4430-ba03-be56f03abdc6/globalmount"

bortek commented 3 years ago

Hi

I am experiencing this same issue. I have been upgrading the CSI driver from 1.0.2 to 2.1.0. I deleted the sts, ds, and rbacs, then waited for the pods in kube-system to disappear as the upgrade guide says.

Then, before applying the new yaml files, one of my test pods restarted. This can happen in production too; we cannot guarantee that pods will not restart during the short interval between the old driver being removed and the new driver pods running.

It is logical that my pod would not be able to start since the driver was not there, so the container was stuck in the ContainerCreating state. I scaled it down.

Then I proceeded with the upgrade and applied the 3 new configuration yaml files. After the CSI driver pods started, I tried to scale my test pod back up, but then I got this error:

MountVolume.SetUp failed for volume "pvc-91ba50d8-3e34-4f86-8017-6c186fcab0c2" : rpc error: code = FailedPrecondition desc = Volume ID: "aececd77-a023-461a-a63b-f29f06e61961" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-91ba50d8-3e34-4f86-8017-6c186fcab0c2/globalmount"

I then rebooted all the compute nodes one by one, since I saw a similar issue where doing so was suggested (https://github.com/digitalocean/csi-digitalocean/issues/32). My test pod restarted and I started to get this error:

Generated from attachdetach-controller 740 times in the last hour AttachVolume.Attach failed for volume "pvc-91ba50d8-3e34-4f86-8017-6c186fcab0c2" : rpc error: code = Internal desc = failed to attach disk: "aececd77-a023-461a-a63b-f29f06e61961" with node: "compute-2.clusterx.internal.com" err failed to attach cns volume: "aececd77-a023-461a-a63b-f29f06e61961" to node vm: "VirtualMachine:vm-336 [VirtualCenterHost: 10.22.34.1, UUID: 423feb4c-f887-57c1-56fc-274046409b21, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-21, VirtualCenterHost: 10.22.34.1]]". fault: "(*types.LocalizedMethodFault)(0xc001018b40)({\n DynamicData: (types.DynamicData) {\n },\n Fault: (types.CnsFault) {\n BaseMethodFault: (types.BaseMethodFault) ,\n Reason: (string) (len=79) \"CNS: The input volume aececd77-a023-461a-a63b-f29f06e61961 is not a CNS volume.\"\n },\n LocalizedMessage: (string) (len=95) \"CnsFault error: CNS: The input volume aececd77-a023-461a-a63b-f29f06e61961 is not a CNS volume.\"\n})\n". opId: "04a0c75d"

In combination with this one

Unable to attach or mount volumes: unmounted volumes=[test0001], unattached volumes=[default-token-pvl44 test0001]: timed out waiting for the condition

So I decided to let it go and self-stabilize for about 30 minutes. When I came back, the pods were all running.

Have you been able to reproduce it on your side?

bortek commented 3 years ago

I have done the upgrade test in another lab cluster and was not able to reproduce the same error.

I tested restarting a pod between the time the old driver was removed and the new driver was installed, and those pods were stuck in ContainerCreating as expected. What helped fix them was scaling down to 0, waiting a couple of minutes, and then scaling up again.

mohideen commented 3 years ago

@RaunakShah I have not noticed this issue after I upgraded to v2.0.1/vCenter-7.0.

LinAnt commented 3 years ago

We get this every once in a while for a cronjob we are running: vSphere 6.7 U3 and CSI driver 2.2.1 running on k8s 1.20.

2021-06-22 15:08:19 
E0622 12:08:19.031139 3413570 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-06-22 12:10:21.031106058 +0000 UTC m=+3557873.702055665 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1624359660-lqwsj\" (UID: \"4d4c5b33-3cc3-43b6-bab3-b333740b0fa9\") : rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-06-22 15:08:19 
I0622 12:08:19.016485 3413570 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-06-22 15:08:19 
I0622 12:08:19.010003 3413570 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-06-22 15:08:19 
I0622 12:08:19.009673 3413570 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") 
2021-06-22 15:06:40 
E0622 12:06:40.093882 3413570 pod_workers.go:191] Error syncing pod 4d4c5b33-3cc3-43b6-bab3-b333740b0fa9 ("importer-daily-1624359660-lqwsj_importer-daily(4d4c5b33-3cc3-43b6-bab3-b333740b0fa9)"), skipping: unmounted volumes=[data], unattached volumes=[istiod-ca-cert vault-env data istio-podinfo importer-daily-token-jfc5c istio-envoy vault-tls istio-data istio-token]: timed out waiting for the condition
2021-06-22 15:06:40 
E0622 12:06:40.093849 3413570 kubelet.go:1684] Unable to attach or mount volumes for pod "importer-daily-1624359660-lqwsj_importer-daily(4d4c5b33-3cc3-43b6-bab3-b333740b0fa9)": unmounted volumes=[data], unattached volumes=[istiod-ca-cert vault-env data istio-podinfo importer-daily-token-jfc5c istio-envoy vault-tls istio-data istio-token]: timed out waiting for the condition; skipping pod
2021-06-22 15:06:16 
E0622 12:06:16.984234 3413570 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-06-22 12:08:18.984193992 +0000 UTC m=+3557751.655143613 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1624359660-lqwsj\" (UID: \"4d4c5b33-3cc3-43b6-bab3-b333740b0fa9\") : rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-06-22 15:06:16 
I0622 12:06:16.971916 3413570 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-06-22 15:06:16 
I0622 12:06:16.967594 3413570 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-06-22 15:06:16 
I0622 12:06:16.967230 3413570 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") 
2021-06-22 15:04:24 
E0622 12:04:24.095855 3413570 pod_workers.go:191] Error syncing pod 4d4c5b33-3cc3-43b6-bab3-b333740b0fa9 ("importer-daily-1624359660-lqwsj_importer-daily(4d4c5b33-3cc3-43b6-bab3-b333740b0fa9)"), skipping: unmounted volumes=[data], unattached volumes=[istio-data istio-envoy data importer-daily-token-jfc5c istio-podinfo vault-env istio-token vault-tls istiod-ca-cert]: timed out waiting for the condition
2021-06-22 15:04:24 
E0622 12:04:24.095818 3413570 kubelet.go:1684] Unable to attach or mount volumes for pod "importer-daily-1624359660-lqwsj_importer-daily(4d4c5b33-3cc3-43b6-bab3-b333740b0fa9)": unmounted volumes=[data], unattached volumes=[istio-data istio-envoy data importer-daily-token-jfc5c istio-podinfo vault-env istio-token vault-tls istiod-ca-cert]: timed out waiting for the condition; skipping pod
2021-06-22 15:04:14 
E0622 12:04:14.940721 3413570 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-06-22 12:06:16.940687293 +0000 UTC m=+3557629.611636901 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1624359660-lqwsj\" (UID: \"4d4c5b33-3cc3-43b6-bab3-b333740b0fa9\") : rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-06-22 15:04:14 
I0622 12:04:14.930271 3413570 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-06-22 15:04:14 
I0622 12:04:14.911959 3413570 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-06-22 15:04:14 
I0622 12:04:14.911518 3413570 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1624359660-lqwsj" (UID: "4d4c5b33-3cc3-43b6-bab3-b333740b0fa9") 
RaunakShah commented 3 years ago

I still can't repro this issue, but looking more at some of the code paths, I think this happens when Kubernetes force detaches the volume. There's a 6 minute wait in the Kubernetes code - https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/attachdetach/reconciler/reconciler.go#L58

After that it force detaches the volume. I believe the issue is that the unmount operation is still pending on the kubelet side. Can you verify the kubelet logs on the node where the pod was previously scheduled? I suspect NodeUnpublishVolume continues to fail even though the volume was successfully detached.
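
For example, something like this on the node where the pod previously ran (volume ID taken from the logs above):

journalctl -u kubelet | grep -E 'UnmountVolume|NodeUnpublishVolume|73926cf3-8899-464e-a15f-25e9119fb1b9'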

LinAnt commented 3 years ago

Happened again just now:

logs:

2021-07-05 18:05:28 
I0705 15:05:28.023372  312924 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-07-05 18:05:27 
I0705 15:05:27.936262  312924 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath ""
2021-07-05 18:05:27 
I0705 15:05:27.935979  312924 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") 
2021-07-05 18:05:27 
E0705 15:05:27.840032  312924 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName:5b9e0247-c8b3-48da-becc-d24f76311a48 nodeName:}" failed. No retries permitted until 2021-07-05 15:05:28.339975056 +0000 UTC m=+13152.803052143 (durationBeforeRetry 500ms). Error: "UnmountVolume.TearDown failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"5b9e0247-c8b3-48da-becc-d24f76311a48\" (UID: \"5b9e0247-c8b3-48da-becc-d24f76311a48\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
2021-07-05 18:05:27 
I0705 15:05:27.835466  312924 reconciler.go:196] operationExecutor.UnmountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "5b9e0247-c8b3-48da-becc-d24f76311a48" (UID: "5b9e0247-c8b3-48da-becc-d24f76311a48") 
2021-07-05 18:05:27 
E0705 15:05:27.829419  312924 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 15:05:28.329381379 +0000 UTC m=+13152.792458414 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625493660-w27sl\" (UID: \"c02445fc-ee87-4087-ae68-843760816a57\") : rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-07-05 18:05:27 
I0705 15:05:27.622965  312924 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-07-05 18:05:27 
I0705 15:05:27.534625  312924 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath ""
2021-07-05 18:05:27 
I0705 15:05:27.534107  312924 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") 
2021-07-05 18:05:27 
E0705 15:05:27.438160  312924 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName:5b9e0247-c8b3-48da-becc-d24f76311a48 nodeName:}" failed. No retries permitted until 2021-07-05 15:05:27.938121202 +0000 UTC m=+13152.401198239 (durationBeforeRetry 500ms). Error: "UnmountVolume.TearDown failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"5b9e0247-c8b3-48da-becc-d24f76311a48\" (UID: \"5b9e0247-c8b3-48da-becc-d24f76311a48\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
2021-07-05 18:05:27 
I0705 15:05:27.433435  312924 reconciler.go:196] operationExecutor.UnmountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "5b9e0247-c8b3-48da-becc-d24f76311a48" (UID: "5b9e0247-c8b3-48da-becc-d24f76311a48") 
2021-07-05 18:05:27 
E0705 15:05:27.427867  312924 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 15:05:27.927822517 +0000 UTC m=+13152.390899560 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625493660-w27sl\" (UID: \"c02445fc-ee87-4087-ae68-843760816a57\") : rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-07-05 18:05:27 
I0705 15:05:27.222893  312924 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-07-05 18:05:27 
I0705 15:05:27.132872  312924 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath ""
2021-07-05 18:05:27 
I0705 15:05:27.132554  312924 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") 
2021-07-05 18:05:27 
E0705 15:05:27.036358  312924 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName:5b9e0247-c8b3-48da-becc-d24f76311a48 nodeName:}" failed. No retries permitted until 2021-07-05 15:05:27.536329085 +0000 UTC m=+13151.999406134 (durationBeforeRetry 500ms). Error: "UnmountVolume.TearDown failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"5b9e0247-c8b3-48da-becc-d24f76311a48\" (UID: \"5b9e0247-c8b3-48da-becc-d24f76311a48\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
2021-07-05 18:05:27 
I0705 15:05:27.031840  312924 reconciler.go:196] operationExecutor.UnmountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "5b9e0247-c8b3-48da-becc-d24f76311a48" (UID: "5b9e0247-c8b3-48da-becc-d24f76311a48") 
2021-07-05 18:05:27 
E0705 15:05:27.026737  312924 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 15:05:27.526689585 +0000 UTC m=+13151.989766628 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625493660-w27sl\" (UID: \"c02445fc-ee87-4087-ae68-843760816a57\") : rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-07-05 18:05:26 
I0705 15:05:26.824742  312924 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath "csi-3fbfa71984550731ed0b33a6863054fd0317e64677d964fd64e9a93f6ab414a8"
2021-07-05 18:05:26 
I0705 15:05:26.730642  312924 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") DevicePath ""
2021-07-05 18:05:26 
I0705 15:05:26.730437  312924 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625493660-w27sl" (UID: "c02445fc-ee87-4087-ae68-843760816a57") 

Event from k8s:


MountVolume.SetUp failed for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" : rpc error: code = FailedPrecondition desc = Volume ID: "73926cf3-8899-464e-a15f-25e9119fb1b9" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount"
LinAnt commented 3 years ago

The pod had been initializing for over an hour due to the volumes not being mounted correctly.

LinAnt commented 3 years ago

Logs from the worker node where the job succeeded earlier:

2021-07-05 14:02:15 
I0705 11:02:15.184802 2312548 reconciler.go:319] Volume detached for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") on node "prod-us-worker7" DevicePath "csi-31fd1cfccdcd792090abfb1c4ab05dd83673b5c7cd807ca4f2983a33d1e548f0"
2021-07-05 14:02:15 
I0705 11:02:15.114267 2312548 operation_generator.go:891] UnmountDevice succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" %!(EXTRA string=UnmountDevice succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") on node "prod-us-worker7" )
2021-07-05 14:02:15 
I0705 11:02:15.084273 2312548 reconciler.go:312] operationExecutor.UnmountDevice started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") on node "prod-us-worker7" 
2021-07-05 14:02:14 
I0705 11:02:14.994948 2312548 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9" (OuterVolumeSpecName: "data") pod "167cedc4-30dc-4da4-832e-5b0691020f06" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06"). InnerVolumeSpecName "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8". PluginName "kubernetes.io/csi", VolumeGidValue ""
2021-07-05 14:01:20 
I0705 11:01:20.666526 2312548 operation_generator.go:672] MountVolume.SetUp succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:20 
I0705 11:01:20.642907 2312548 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount"
2021-07-05 14:01:20 
I0705 11:01:20.546252 2312548 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") DevicePath "csi-31fd1cfccdcd792090abfb1c4ab05dd83673b5c7cd807ca4f2983a33d1e548f0"
2021-07-05 14:01:20 
I0705 11:01:20.535964 2312548 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") DevicePath ""
2021-07-05 14:01:20 
I0705 11:01:20.535602 2312548 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:20 
I0705 11:01:20.444886 2312548 operation_generator.go:1357] Controller attach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") device path: ""
2021-07-05 14:01:20 
I0705 11:01:20.435046 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:12 
E0705 11:01:12.393093 2312548 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 11:01:20.393037597 +0000 UTC m=+4677191.073798279 (durationBeforeRetry 8s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625482860-5xwjc\" (UID: \"167cedc4-30dc-4da4-832e-5b0691020f06\") "
2021-07-05 14:01:12 
I0705 11:01:12.392951 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:08 
E0705 11:01:08.368490 2312548 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 11:01:12.368426254 +0000 UTC m=+4677183.049186923 (durationBeforeRetry 4s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625482860-5xwjc\" (UID: \"167cedc4-30dc-4da4-832e-5b0691020f06\") "
2021-07-05 14:01:08 
I0705 11:01:08.368313 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:06 
E0705 11:01:06.354024 2312548 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 11:01:08.353958892 +0000 UTC m=+4677179.034719589 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625482860-5xwjc\" (UID: \"167cedc4-30dc-4da4-832e-5b0691020f06\") "
2021-07-05 14:01:06 
I0705 11:01:06.353845 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:05 
E0705 11:01:05.347497 2312548 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 11:01:06.347439046 +0000 UTC m=+4677177.028199752 (durationBeforeRetry 1s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625482860-5xwjc\" (UID: \"167cedc4-30dc-4da4-832e-5b0691020f06\") "
2021-07-05 14:01:05 
I0705 11:01:05.347328 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 14:01:04 
E0705 11:01:04.843484 2312548 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9 podName: nodeName:}" failed. No retries permitted until 2021-07-05 11:01:05.343431864 +0000 UTC m=+4677176.024192545 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-9dd26215-c898-4942-a9c5-8b218bd399e8\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9\") pod \"importer-daily-1625482860-5xwjc\" (UID: \"167cedc4-30dc-4da4-832e-5b0691020f06\") "
2021-07-05 14:01:04 
I0705 11:01:04.843367 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625482860-5xwjc" (UID: "167cedc4-30dc-4da4-832e-5b0691020f06") 
2021-07-05 11:02:01 
I0705 08:02:01.434030 2312548 reconciler.go:319] Volume detached for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") on node "prod-us-worker7" DevicePath "csi-31fd1cfccdcd792090abfb1c4ab05dd83673b5c7cd807ca4f2983a33d1e548f0"
2021-07-05 11:02:01 
I0705 08:02:01.382089 2312548 operation_generator.go:891] UnmountDevice succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" %!(EXTRA string=UnmountDevice succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") on node "prod-us-worker7" )
2021-07-05 11:02:01 
I0705 08:02:01.333579 2312548 reconciler.go:312] operationExecutor.UnmountDevice started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") on node "prod-us-worker7" 
2021-07-05 11:02:01 
I0705 08:02:01.252063 2312548 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9" (OuterVolumeSpecName: "data") pod "01bfff51-7a2d-42b8-a2ac-11e5981e334d" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d"). InnerVolumeSpecName "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8". PluginName "kubernetes.io/csi", VolumeGidValue ""
2021-07-05 11:01:10 
I0705 08:01:10.006902 2312548 operation_generator.go:672] MountVolume.SetUp succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") 
2021-07-05 11:01:09 
I0705 08:01:09.982126 2312548 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount"
2021-07-05 11:01:09 
I0705 08:01:09.890164 2312548 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") DevicePath "csi-31fd1cfccdcd792090abfb1c4ab05dd83673b5c7cd807ca4f2983a33d1e548f0"
2021-07-05 11:01:09 
I0705 08:01:09.881242 2312548 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") DevicePath ""
2021-07-05 11:01:09 
I0705 08:01:09.880916 2312548 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") 
2021-07-05 11:01:09 
I0705 08:01:09.787425 2312548 operation_generator.go:1357] Controller attach succeeded for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") device path: ""
2021-07-05 11:01:09 
I0705 08:01:09.780430 2312548 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-9dd26215-c898-4942-a9c5-8b218bd399e8" (UniqueName: "kubernetes.io/csi/csi.vsphere.vmware.com^73926cf3-8899-464e-a15f-25e9119fb1b9") pod "importer-daily-1625472060-lw2h8" (UID: "01bfff51-7a2d-42b8-a2ac-11e5981e334d") 
LinAnt commented 3 years ago

It seems a quick fix for when this happens is to cordon the worker where the workload fails to start, redeploy the workload, and then delete the volumeattachment.
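
In kubectl terms, roughly (node, workload, and attachment names are placeholders):

kubectl cordon <worker-node>
# redeploy the workload (delete and re-apply it, or re-trigger the job)
kubectl get volumeattachments | grep <pv-name>          # find the stale attachment for the PV
kubectl delete volumeattachment <attachment-name>
kubectl uncordon <worker-node>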

LinAnt commented 3 years ago

One thing a team member mentioned today was that this error only seems to affect certain worker nodes 🤔

RaunakShah commented 3 years ago

@LinAnt can you also collect the vsphere-csi-node logs on this worker node? Btw, are you attempting to mount a raw block volume?

LinAnt commented 3 years ago
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31276: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31275: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.525579265Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"440b4f4d-c992-4995-8d04-c983f5d39dcf"}
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31275: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31274: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.425933032Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"cf1d5884-9e68-459f-8da2-69609928f801"}
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.425831312Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"cf1d5884-9e68-459f-8da2-69609928f801"}
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.425767762Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"cf1d5884-9e68-459f-8da2-69609928f801"}
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.425559863Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"cf1d5884-9e68-459f-8da2-69609928f801"}
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31274: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31273: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31273: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31272: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.123342431Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"d8d79e1e-5427-4ed2-90ea-c65c3ea3f265"}
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31272: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31271: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.027953945Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"03b5855c-fb97-40fe-bff2-27880a79eee3"}
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.027744592Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"03b5855c-fb97-40fe-bff2-27880a79eee3"}
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.027647376Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"03b5855c-fb97-40fe-bff2-27880a79eee3"}
2021-07-05 18:05:25 | {"level":"info","time":"2021-07-05T15:05:25.02726994Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"03b5855c-fb97-40fe-bff2-27880a79eee3"}
2021-07-05 18:05:25 | time="2021-07-05T15:05:25Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31271: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31270: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31270: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31269: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.721334367Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"02c78648-9252-49b4-9617-33e205454839"}
2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31269: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31268: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.625139288Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"a8da2404-fbe7-42b9-ac78-354798ad7f62"}
2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.625073411Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"a8da2404-fbe7-42b9-ac78-354798ad7f62"}
2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.625055263Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"a8da2404-fbe7-42b9-ac78-354798ad7f62"}
  |   | 2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.624926383Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"a8da2404-fbe7-42b9-ac78-354798ad7f62"}
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31268: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31267: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31267: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31266: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
  |   | 2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.319746324Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"00be3175-df98-4be2-af63-b4add3ce33b7"}
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31266: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31265: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
  |   | 2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.226424743Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"68a029f5-f154-4690-b060-9357d5186438"}
  |   | 2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.226359618Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"68a029f5-f154-4690-b060-9357d5186438"}
  |   | 2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.226337914Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"68a029f5-f154-4690-b060-9357d5186438"}
  |   | 2021-07-05 18:05:24 | {"level":"info","time":"2021-07-05T15:05:24.226161338Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"68a029f5-f154-4690-b060-9357d5186438"}
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31265: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31264: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:24 | time="2021-07-05T15:05:24Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31264: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31263: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.917792313Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"3ccc350e-8ce3-4248-8d81-044500c1e1fa"}
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31263: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31262: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.824970038Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"3288dc93-49ac-47d6-be14-af96a20a0022"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.824892781Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"3288dc93-49ac-47d6-be14-af96a20a0022"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.824865466Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"3288dc93-49ac-47d6-be14-af96a20a0022"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.824707293Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"3288dc93-49ac-47d6-be14-af96a20a0022"}
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31262: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31261: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31261: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31260: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.515937193Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"0b897efb-6569-4a25-80d3-5fbe54ca8601"}
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31260: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31259: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.425334024Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"a39bcece-df9a-43e3-8ccf-fd7a3abc34f0"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.425268184Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"a39bcece-df9a-43e3-8ccf-fd7a3abc34f0"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.425241697Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"a39bcece-df9a-43e3-8ccf-fd7a3abc34f0"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.4251097Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"a39bcece-df9a-43e3-8ccf-fd7a3abc34f0"}
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31259: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31258: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31258: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31257: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.114169336Z","caller":"service/node.go:404","msg":"NodeUnpublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 TargetPath:/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"624bb722-eecc-45ec-bf2b-4a65091cf068"}
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REQ 31257: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, TargetPath=/var/lib/kubelet/pods/5b9e0247-c8b3-48da-becc-d24f76311a48/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REP 31256: rpc error: code = FailedPrecondition desc = Volume ID: \"73926cf3-8899-464e-a15f-25e9119fb1b9\" does not appear staged to \"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount\""
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.025517048Z","caller":"service/node.go:1234","msg":"created directory","TraceId":"80605d61-8d5a-49f3-a537-261ccc76635c"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.025451549Z","caller":"service/node.go:1226","msg":"creating directory :\"/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount\"","TraceId":"80605d61-8d5a-49f3-a537-261ccc76635c"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.025424841Z","caller":"service/node.go:853","msg":"PublishMountVolume called with args: {volID:73926cf3-8899-464e-a15f-25e9119fb1b9 target:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount stagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount diskID:6000c29f02c8986f7be8ff76a535de90 volumePath:/dev/disk/by-id/wwn-0x6000c29f02c8986f7be8ff76a535de90 device:/dev/sdc ro:false}","TraceId":"80605d61-8d5a-49f3-a537-261ccc76635c"}
  |   | 2021-07-05 18:05:23 | {"level":"info","time":"2021-07-05T15:05:23.025274688Z","caller":"service/node.go:346","msg":"NodePublishVolume: called with args {VolumeId:73926cf3-8899-464e-a15f-25e9119fb1b9 PublishContext:map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount TargetPath:/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount VolumeCapability:mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"80605d61-8d5a-49f3-a537-261ccc76635c"}
  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodePublishVolume: REQ 31256: VolumeId=73926cf3-8899-464e-a15f-25e9119fb1b9, PublishContext=map[diskUUID:6000c29f02c8986f7be8ff76a535de90 type:vSphere CNS Block Volume], StagingTargetPath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/globalmount, TargetPath=/var/lib/kubelet/pods/c02445fc-ee87-4087-ae68-843760816a57/volumes/kubernetes.io~csi/pvc-9dd26215-c898-4942-a9c5-8b218bd399e8/mount, VolumeCapability=mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , Readonly=false, VolumeContext=map[storage.kubernetes.io/csiProvisionerIdentity:1612496799168-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:22 | time="2021-07-05T15:05:22Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REP 31255: Capabilities=[rpc:<type:STAGE_UNSTAGE_VOLUME >  rpc:<type:EXPAND_VOLUME >  rpc:<type:GET_VOLUME_STATS > ], XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:22 | time="2021-07-05T15:05:22Z" level=debug msg="/csi.v1.Node/NodeGetCapabilities: REQ 31255: XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  |   | 2021-07-05 18:05:22 | time="2021-07-05T15:05:22Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31254: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"

We mount regular ext4 volumes :+1:

RaunakShah commented 3 years ago

Thanks for the detailed logs.

I can see that NodeUnpublishVolume is failing as shown below:

  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31260: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"

I think this is what is causing all the subsequent failures. @shalini-b, have you seen this type of error before during NodeUnpublish calls?

RaunakShah commented 3 years ago

It seems that a quick fix for when this happens is to cordon the worker node where the workload fails to start, redeploy the workload, and then delete the VolumeAttachment.

Can you describe the VolumeAttachment when it's failing?

LinAnt commented 3 years ago

Can do, but it needs to fail again first, since we "fixed" the failed pod by cordoning the node and then restarting the job.

shalini-b commented 3 years ago

Thanks for the detailed logs.

I can see that NodeUnpublishVolume is failing as shown below:

  |   | 2021-07-05 18:05:23 | time="2021-07-05T15:05:23Z" level=debug msg="/csi.v1.Node/NodeUnpublishVolume: REP 31260: rpc error: code = Internal desc = error getting block device for volume: 73926cf3-8899-464e-a15f-25e9119fb1b9, err: lstat /dev/sdb: no such file or directory"

I think this is what is causing all the subsequent failures. @shalini-b, have you seen this type of error before during NodeUnpublish calls?

The error seems to be coming from https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.1/pkg/csi/service/node.go#L475, implying that the path /dev/sdb doesn't exist. That seems strange, because we already check whether the target path is present in the mounts right before this code: https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.1/pkg/csi/service/node.go#L433. This basically means the target path, along with its device path, is present in the mount points, but the device itself doesn't exist.
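
To make that failure mode concrete, here is a minimal standalone sketch (not the driver's actual code) of the same two-step check: look the target path up in the mount table, then lstat the device recorded for it. The target path below is a hypothetical placeholder; when the mount entry still exists but the /dev node behind it has vanished, the final lstat fails with exactly the "no such file or directory" error quoted above.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// deviceForTarget scans /proc/mounts and returns the device recorded for
// the given mount target -- the same information consulted when verifying
// that the target path is still mounted.
func deviceForTarget(target string) (string, bool) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return "", false
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[1] == target {
			return fields[0], true
		}
	}
	return "", false
}

func main() {
	// Hypothetical pod target path, for illustration only.
	target := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pvc-name>/mount"

	dev, ok := deviceForTarget(target)
	if !ok {
		fmt.Println("target path not found in /proc/mounts")
		return
	}

	// The mount table may still reference a device such as /dev/sdb even
	// though the underlying disk has been detached or renamed. In that case
	// the device node is gone and Lstat fails with "no such file or
	// directory" -- the error reported by NodeUnpublishVolume above.
	if _, err := os.Lstat(dev); err != nil {
		fmt.Printf("error getting block device: %v\n", err)
		return
	}
	fmt.Printf("device %s is present\n", dev)
}
```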

LinAnt commented 2 years ago

We just experienced this again for a deployment in another cluster.

A quick fix is to cordon the node where the pod is failing to start, delete the VolumeAttachment, and delete the pod to force it to start on another worker.
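
For reference, a rough client-go sketch of those three steps (cordon the node, delete the VolumeAttachment, delete the pod). The node, attachment, and pod names below are placeholders, and plain `kubectl cordon` / `kubectl delete volumeattachment` / `kubectl delete pod` achieves the same thing:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder names -- substitute the failing node, the stuck
	// VolumeAttachment and the stuck pod from your own cluster.
	const (
		nodeName       = "worker-node-1"
		attachmentName = "csi-<volumeattachment-name>"
		podNamespace   = "default"
		podName        = "my-statefulset-0"
	)

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}
	ctx := context.Background()

	// 1. Cordon the node (equivalent to `kubectl cordon`).
	patch := []byte(`{"spec":{"unschedulable":true}}`)
	if _, err := client.CoreV1().Nodes().Patch(ctx, nodeName, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatalf("cordoning node: %v", err)
	}

	// 2. Delete the stale VolumeAttachment so the volume can be re-attached.
	if err := client.StorageV1().VolumeAttachments().Delete(ctx, attachmentName, metav1.DeleteOptions{}); err != nil {
		log.Fatalf("deleting volumeattachment: %v", err)
	}

	// 3. Delete the pod so its controller recreates it on another worker.
	if err := client.CoreV1().Pods(podNamespace).Delete(ctx, podName, metav1.DeleteOptions{}); err != nil {
		log.Fatalf("deleting pod: %v", err)
	}

	log.Println("workaround applied: node cordoned, attachment and pod deleted")
}
```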

I am happy to grep logs if someone has an idea of what to look for :+1:

RaunakShah commented 2 years ago

Hi @LinAnt, next time you run into this, can you increase the vsphere-csi-node and kubelet log levels on the affected node and capture the logs? I suspect the issue is that the node restarts and /dev is unmounted. Can you also check whether there were any issues with the node?

The error seems to be coming from https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.1/pkg/csi/service/node.go#L475, implying that the path /dev/sdb doesn't exist. That seems strange, because we already check whether the target path is present in the mounts right before this code: https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.1/pkg/csi/service/node.go#L433. This basically means the target path, along with its device path, is present in the mount points, but the device itself doesn't exist.

@shalini-b I think that first we stat the "target path", which is something like /var/lib/kubelet/pods/9e016b5a-5169-451e-a7ae-c1b42cc8a791/volumes/kubernetes.io~csi/pvc-c5bce8b1-186c-4be5-accc-1b649fc65855/mount.

But where it's failing, we lstat the device, which leads to an error like lstat /dev/sdah: no such file or directory.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

LinAnt commented 2 years ago

This is still happening in a cluster running 2.3.0. We will upgrade to 2.4.x now, and afterwards I will bump the log levels to see what is actually going on.

LinAnt commented 2 years ago

/remove-lifecycle stale

LinAnt commented 2 years ago

Okay, so I got some csi-node logs with loggerlevel: development

2022-02-08T09:24:46.306Z    INFO    logger/logger.go:41 Setting default log level to :"DEVELOPMENT"
2022-02-08T09:24:46.306Z    INFO    vsphere-csi/main.go:56  Version : v2.4.1    {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec"}
2022-02-08T09:24:46.306Z    DEBUG   commonco/utils.go:102   Container orchestrator init params: {InternalFeatureStatesConfigInfo:{Name:internal-feature-states.csi.vsphere.vmware.com Namespace:vmware-system-csi} ServiceMode:node}    {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec"}
2022-02-08T09:24:46.306Z    INFO    logger/logger.go:41 Setting default log level to :"DEVELOPMENT"
2022-02-08T09:24:46.306Z    INFO    k8sorchestrator/k8sorchestrator.go:152  Initializing k8sOrchestratorInstance    {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec", "TraceId": "195cafc6-8dbe-4ebf-b8ce-f0f7637065b8"}
2022-02-08T09:24:46.306Z    INFO    kubernetes/kubernetes.go:85 k8s client using in-cluster config  {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec", "TraceId": "195cafc6-8dbe-4ebf-b8ce-f0f7637065b8"}
2022-02-08T09:24:46.307Z    INFO    kubernetes/kubernetes.go:352    Setting client QPS to 100.000000 and Burst to 100.  {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec", "TraceId": "195cafc6-8dbe-4ebf-b8ce-f0f7637065b8"}
2022-02-08T09:24:46.324Z    INFO    k8sorchestrator/k8sorchestrator.go:258  New internal feature states values stored successfully: map[async-query-volume:true block-volume-snapshot:false csi-auth-check:true csi-migration:false csi-windows-support:false improved-csi-idempotency:true improved-volume-topology:true online-volume-extend:true trigger-csi-fullsync:false] {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec", "TraceId": "195cafc6-8dbe-4ebf-b8ce-f0f7637065b8"}
2022-02-08T09:24:46.324Z    INFO    k8sorchestrator/k8sorchestrator.go:178  k8sOrchestratorInstance initialized {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec", "TraceId": "195cafc6-8dbe-4ebf-b8ce-f0f7637065b8"}
2022-02-08T09:24:46.327Z    INFO    k8sorchestrator/k8sorchestrator.go:484  configMapAdded: Internal feature state values from "internal-feature-states.csi.vsphere.vmware.com" stored successfully: map[async-query-volume:true block-volume-snapshot:false csi-auth-check:true csi-migration:false csi-windows-support:false improved-csi-idempotency:true improved-volume-topology:true online-volume-extend:true trigger-csi-fullsync:false]    {"TraceId": "b4c606fb-86b8-49e6-98ed-eefba9f77bab"}
2022-02-08T09:24:46.331Z    INFO    service/driver.go:109   Configured: "csi.vsphere.vmware.com" with clusterFlavor: "VANILLA" and mode: "node" {"TraceId": "87146d84-cddd-4e50-bcc1-ccdfc86a9bec", "TraceId": "195cafc6-8dbe-4ebf-b8ce-f0f7637065b8"}
2022-02-08T09:24:46.332Z    INFO    service/server.go:132   identity service registered
2022-02-08T09:24:46.332Z    INFO    service/server.go:147   node service registered
2022-02-08T09:24:46.332Z    INFO    service/server.go:153   Listening for connections on address: /csi/csi.sock
2022-02-08T09:24:47.859Z    INFO    service/node.go:335 NodeGetInfo: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.859Z    INFO    service/node.go:359 NodeGetInfo: MAX_VOLUMES_PER_NODE is set to 59  {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.859Z    INFO    kubernetes/kubernetes.go:85 k8s client using in-cluster config  {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.859Z    INFO    kubernetes/kubernetes.go:352    Setting client QPS to 100.000000 and Burst to 100.  {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.859Z    INFO    kubernetes/kubernetes.go:85 k8s client using in-cluster config  {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.860Z    INFO    kubernetes/kubernetes.go:352    Setting client QPS to 100.000000 and Burst to 100.  {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.881Z    INFO    k8sorchestrator/topology.go:353 Topology service initiated successfully {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.896Z    INFO    k8sorchestrator/topology.go:396 CSINodeTopology instance already exists for NodeName: "dev-worker-bigmem1"  {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.897Z    INFO    k8sorchestrator/topology.go:466 Timeout is set to 1 minute(s)   {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:47.901Z    INFO    service/node.go:430 NodeGetInfo response: node_id:"dev-worker-bigmem1" max_volumes_per_node:59 accessible_topology:<>   {"TraceId": "830f46cb-aad8-4be8-8141-f04e70be3371"}
2022-02-08T09:24:48.526Z    INFO    service/node.go:150 NodePublishVolume: called with args {VolumeId:3de5cd7b-693f-41d3-9820-ea9c6ea32480 PublishContext:map[diskUUID:6000c29a36da969051ffa549a497ab72 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount TargetPath:/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1605195819421-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}   {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.526Z    DEBUG   osutils/linux_os_utils.go:970   Checking if volume "3de5cd7b-693f-41d3-9820-ea9c6ea32480" is attached to disk "6000c29a36da969051ffa549a497ab72"    {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.526Z    DEBUG   osutils/linux_os_utils.go:692   found disk: disk ID: "6000c29a36da969051ffa549a497ab72", volume path: "/dev/disk/by-id/wwn-0x6000c29a36da969051ffa549a497ab72"  {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.526Z    DEBUG   osutils/linux_os_utils.go:979   Device: 0xc000a24490    {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.527Z    INFO    osutils/linux_os_utils.go:378   PublishMountVolume called with args: {VolID:3de5cd7b-693f-41d3-9820-ea9c6ea32480 Target:/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount StagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount DiskID:6000c29a36da969051ffa549a497ab72 VolumePath:/dev/disk/by-id/wwn-0x6000c29a36da969051ffa549a497ab72 Device:/dev/sdd Ro:false}    {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.527Z    DEBUG   osutils/linux_os_utils.go:923   FsType received from Volume Capability: "ext4"  {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.527Z    INFO    osutils/linux_os_utils.go:741   creating directory :"/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount"   {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.527Z    INFO    osutils/linux_os_utils.go:749   created directory   {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.527Z    DEBUG   osutils/linux_os_utils.go:393   PublishMountVolume: Created target path "/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount"   {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.527Z    DEBUG   osutils/linux_os_utils.go:733   Target path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount verification complete    {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.530Z    DEBUG   osutils/linux_os_utils.go:406   publishMountVol: device {FullPath:/dev/disk/by-id/wwn-0x6000c29a36da969051ffa549a497ab72 Name:wwn-0x6000c29a36da969051ffa549a497ab72 RealDev:/dev/sdd}, device mounts []    {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
2022-02-08T09:24:48.530Z    ERROR   osutils/linux_os_utils.go:432   volume ID: "3de5cd7b-693f-41d3-9820-ea9c6ea32480" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount"    {"TraceId": "9fcf7ab0-8ea0-4dfc-99f8-25f245b6f34a"}
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service/osutils.(*OsUtils).PublishMountVol
    /build/pkg/csi/service/osutils/linux_os_utils.go:432
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).NodePublishVolume
    /build/pkg/csi/service/node.go:196
github.com/container-storage-interface/spec/lib/go/csi._Node_NodePublishVolume_Handler
    /go/pkg/mod/github.com/container-storage-interface/spec@v1.4.0/lib/go/csi/csi.pb.go:6059
google.golang.org/grpc.(*Server).processUnaryRPC
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1024
google.golang.org/grpc.(*Server).handleStream
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:722
2022-02-08T09:24:48.553Z    INFO    service/node.go:208 NodeUnpublishVolume: called with args {VolumeId:3de5cd7b-693f-41d3-9820-ea9c6ea32480 TargetPath:/var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}    {"TraceId": "e9c15efd-9a4d-4ed7-9840-9e085326980d"}
2022-02-08T09:24:48.556Z    DEBUG   osutils/linux_os_utils.go:266   NodeUnpublishVolume: node mounts [{Device:/dev/sda1 Path:/csi Source:/dev/sda1 Type:ext4 Opts:[rw relatime]} {Device:udev Path:/dev Source:udev Type:devtmpfs Opts:[rw nosuid noexec relatime]} {Device:/dev/sda1 Path:/dev/termination-log Source:/csi/var/lib/kubelet/pods/296f4362-b5c9-4c36-8110-b525d8f30f4f/containers/vsphere-csi-node/dfb73aae Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/etc/resolv.conf Source:/csi/var/lib/docker/containers/0a6c3c72ad44ad3d57c7bd2bd0356b2be3479f0348783dea740510451a02a1ab/resolv.conf Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/etc/hostname Source:/csi/var/lib/docker/containers/0a6c3c72ad44ad3d57c7bd2bd0356b2be3479f0348783dea740510451a02a1ab/hostname Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/etc/hosts Source:/csi/var/lib/kubelet/pods/296f4362-b5c9-4c36-8110-b525d8f30f4f/etc-hosts Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/var/lib/kubelet Source:/csi/var/lib/kubelet Type:ext4 Opts:[rw relatime]} {Device:/dev/sdc Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/globalmount Source:/dev/sdc Type:ext4 Opts:[rw relatime]} {Device:/dev/sde Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09924968-99c7-47d6-8d01-0998d517c805/globalmount Source:/dev/sde Type:ext4 Opts:[rw relatime]} {Device:/dev/sde Path:/var/lib/kubelet/pods/8a2561e2-370a-4fd0-89ee-6f56ebabcf4c/volumes/kubernetes.io~csi/pvc-09924968-99c7-47d6-8d01-0998d517c805/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09924968-99c7-47d6-8d01-0998d517c805/globalmount Type:ext4 Opts:[rw relatime]} {Device:/dev/sdc Path:/var/lib/kubelet/pods/8a2561e2-370a-4fd0-89ee-6f56ebabcf4c/volumes/kubernetes.io~csi/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/globalmount Type:ext4 Opts:[rw relatime]} {Device:/dev/sdb Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount Source:/dev/sdb Type:ext4 Opts:[rw relatime]} {Device:/dev/sdb Path:/var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount Type:ext4 Opts:[rw relatime]} {Device:/dev/sdf Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d12c122f-1df4-4a09-85ad-df86e8e82f4e/globalmount Source:/dev/sdf Type:ext4 Opts:[rw relatime]} {Device:/dev/sdf Path:/var/lib/kubelet/pods/b1fc32ea-b4b9-417c-9ac7-4753d7ec9c50/volumes/kubernetes.io~csi/pvc-d12c122f-1df4-4a09-85ad-df86e8e82f4e/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d12c122f-1df4-4a09-85ad-df86e8e82f4e/globalmount Type:ext4 Opts:[rw relatime]}] {"TraceId": "e9c15efd-9a4d-4ed7-9840-9e085326980d"}
2022-02-08T09:24:48.556Z    DEBUG   osutils/linux_os_utils.go:1014  Found target "/var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount" in list of mounts    {"TraceId": "e9c15efd-9a4d-4ed7-9840-9e085326980d"}
2022-02-08T09:24:48.556Z    DEBUG   osutils/linux_os_utils.go:1000  IsFileVolumeMount: Found block volume   {"TraceId": "e9c15efd-9a4d-4ed7-9840-9e085326980d"}
2022-02-08T09:24:48.559Z    ERROR   osutils/linux_os_utils.go:314   error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory {"TraceId": "e9c15efd-9a4d-4ed7-9840-9e085326980d", "TraceId": "23492077-957b-41c3-a5b1-9f1196a3bab5"}
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service/osutils.(*OsUtils).IsBlockVolumePublished
    /build/pkg/csi/service/osutils/linux_os_utils.go:314
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service/osutils.(*OsUtils).CleanupPublishPath
    /build/pkg/csi/service/osutils/linux_os_utils.go:281
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).NodeUnpublishVolume
    /build/pkg/csi/service/node.go:218
github.com/container-storage-interface/spec/lib/go/csi._Node_NodeUnpublishVolume_Handler
    /go/pkg/mod/github.com/container-storage-interface/spec@v1.4.0/lib/go/csi/csi.pb.go:6077
google.golang.org/grpc.(*Server).processUnaryRPC
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1024
google.golang.org/grpc.(*Server).handleStream
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:722
2022-02-08T09:24:48.559Z    ERROR   service/node.go:219 Unmount failed: rpc error: code = Internal desc = error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory
Unmounting arguments: /var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount
    {"TraceId": "e9c15efd-9a4d-4ed7-9840-9e085326980d"}
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).NodeUnpublishVolume
    /build/pkg/csi/service/node.go:219
github.com/container-storage-interface/spec/lib/go/csi._Node_NodeUnpublishVolume_Handler
    /go/pkg/mod/github.com/container-storage-interface/spec@v1.4.0/lib/go/csi/csi.pb.go:6077
google.golang.org/grpc.(*Server).processUnaryRPC
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1024
google.golang.org/grpc.(*Server).handleStream
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:722
2022-02-08T09:24:49.325Z    INFO    service/node.go:150 NodePublishVolume: called with args {VolumeId:3de5cd7b-693f-41d3-9820-ea9c6ea32480 PublishContext:map[diskUUID:6000c29a36da969051ffa549a497ab72 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount TargetPath:/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1605195819421-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}   {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.325Z    DEBUG   osutils/linux_os_utils.go:970   Checking if volume "3de5cd7b-693f-41d3-9820-ea9c6ea32480" is attached to disk "6000c29a36da969051ffa549a497ab72"    {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    DEBUG   osutils/linux_os_utils.go:692   found disk: disk ID: "6000c29a36da969051ffa549a497ab72", volume path: "/dev/disk/by-id/wwn-0x6000c29a36da969051ffa549a497ab72"  {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    DEBUG   osutils/linux_os_utils.go:979   Device: 0xc00000e030    {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    INFO    osutils/linux_os_utils.go:378   PublishMountVolume called with args: {VolID:3de5cd7b-693f-41d3-9820-ea9c6ea32480 Target:/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount StagingTarget:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount DiskID:6000c29a36da969051ffa549a497ab72 VolumePath:/dev/disk/by-id/wwn-0x6000c29a36da969051ffa549a497ab72 Device:/dev/sdd Ro:false}    {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    DEBUG   osutils/linux_os_utils.go:923   FsType received from Volume Capability: "ext4"  {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    INFO    osutils/linux_os_utils.go:741   creating directory :"/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount"   {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    INFO    osutils/linux_os_utils.go:749   created directory   {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    DEBUG   osutils/linux_os_utils.go:393   PublishMountVolume: Created target path "/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount"   {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.326Z    DEBUG   osutils/linux_os_utils.go:733   Target path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount verification complete    {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.329Z    DEBUG   osutils/linux_os_utils.go:406   publishMountVol: device {FullPath:/dev/disk/by-id/wwn-0x6000c29a36da969051ffa549a497ab72 Name:wwn-0x6000c29a36da969051ffa549a497ab72 RealDev:/dev/sdd}, device mounts []    {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
2022-02-08T09:24:49.329Z    ERROR   osutils/linux_os_utils.go:432   volume ID: "3de5cd7b-693f-41d3-9820-ea9c6ea32480" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount"    {"TraceId": "483d2d10-24de-48cf-bfcd-cf7e043c528f"}
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service/osutils.(*OsUtils).PublishMountVol
    /build/pkg/csi/service/osutils/linux_os_utils.go:432
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).NodePublishVolume
    /build/pkg/csi/service/node.go:196
github.com/container-storage-interface/spec/lib/go/csi._Node_NodePublishVolume_Handler
    /go/pkg/mod/github.com/container-storage-interface/spec@v1.4.0/lib/go/csi/csi.pb.go:6059
google.golang.org/grpc.(*Server).processUnaryRPC
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1024
google.golang.org/grpc.(*Server).handleStream
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:722
2022-02-08T09:24:49.362Z    INFO    service/node.go:208 NodeUnpublishVolume: called with args {VolumeId:3de5cd7b-693f-41d3-9820-ea9c6ea32480 TargetPath:/var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}    {"TraceId": "de9c8444-f7d5-4a3f-98da-a42f468e2d4c"}
2022-02-08T09:24:49.365Z    DEBUG   osutils/linux_os_utils.go:266   NodeUnpublishVolume: node mounts [{Device:/dev/sda1 Path:/csi Source:/dev/sda1 Type:ext4 Opts:[rw relatime]} {Device:udev Path:/dev Source:udev Type:devtmpfs Opts:[rw nosuid noexec relatime]} {Device:/dev/sda1 Path:/dev/termination-log Source:/csi/var/lib/kubelet/pods/296f4362-b5c9-4c36-8110-b525d8f30f4f/containers/vsphere-csi-node/dfb73aae Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/etc/resolv.conf Source:/csi/var/lib/docker/containers/0a6c3c72ad44ad3d57c7bd2bd0356b2be3479f0348783dea740510451a02a1ab/resolv.conf Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/etc/hostname Source:/csi/var/lib/docker/containers/0a6c3c72ad44ad3d57c7bd2bd0356b2be3479f0348783dea740510451a02a1ab/hostname Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/etc/hosts Source:/csi/var/lib/kubelet/pods/296f4362-b5c9-4c36-8110-b525d8f30f4f/etc-hosts Type:ext4 Opts:[rw relatime]} {Device:/dev/sda1 Path:/var/lib/kubelet Source:/csi/var/lib/kubelet Type:ext4 Opts:[rw relatime]} {Device:/dev/sdc Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/globalmount Source:/dev/sdc Type:ext4 Opts:[rw relatime]} {Device:/dev/sde Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09924968-99c7-47d6-8d01-0998d517c805/globalmount Source:/dev/sde Type:ext4 Opts:[rw relatime]} {Device:/dev/sde Path:/var/lib/kubelet/pods/8a2561e2-370a-4fd0-89ee-6f56ebabcf4c/volumes/kubernetes.io~csi/pvc-09924968-99c7-47d6-8d01-0998d517c805/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09924968-99c7-47d6-8d01-0998d517c805/globalmount Type:ext4 Opts:[rw relatime]} {Device:/dev/sdc Path:/var/lib/kubelet/pods/8a2561e2-370a-4fd0-89ee-6f56ebabcf4c/volumes/kubernetes.io~csi/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/globalmount Type:ext4 Opts:[rw relatime]} {Device:/dev/sdb Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount Source:/dev/sdb Type:ext4 Opts:[rw relatime]} {Device:/dev/sdb Path:/var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount Type:ext4 Opts:[rw relatime]} {Device:/dev/sdf Path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d12c122f-1df4-4a09-85ad-df86e8e82f4e/globalmount Source:/dev/sdf Type:ext4 Opts:[rw relatime]} {Device:/dev/sdf Path:/var/lib/kubelet/pods/b1fc32ea-b4b9-417c-9ac7-4753d7ec9c50/volumes/kubernetes.io~csi/pvc-d12c122f-1df4-4a09-85ad-df86e8e82f4e/mount Source:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d12c122f-1df4-4a09-85ad-df86e8e82f4e/globalmount Type:ext4 Opts:[rw relatime]}] {"TraceId": "de9c8444-f7d5-4a3f-98da-a42f468e2d4c"}
2022-02-08T09:24:49.365Z    DEBUG   osutils/linux_os_utils.go:1014  Found target "/var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount" in list of mounts    {"TraceId": "de9c8444-f7d5-4a3f-98da-a42f468e2d4c"}
2022-02-08T09:24:49.365Z    DEBUG   osutils/linux_os_utils.go:1000  IsFileVolumeMount: Found block volume   {"TraceId": "de9c8444-f7d5-4a3f-98da-a42f468e2d4c"}
2022-02-08T09:24:49.367Z    ERROR   osutils/linux_os_utils.go:314   error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory {"TraceId": "de9c8444-f7d5-4a3f-98da-a42f468e2d4c", "TraceId": "04cf9711-9562-47b3-bbb6-0d51502b94a6"}
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service/osutils.(*OsUtils).IsBlockVolumePublished
    /build/pkg/csi/service/osutils/linux_os_utils.go:314
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service/osutils.(*OsUtils).CleanupPublishPath
    /build/pkg/csi/service/osutils/linux_os_utils.go:281
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).NodeUnpublishVolume
    /build/pkg/csi/service/node.go:218
github.com/container-storage-interface/spec/lib/go/csi._Node_NodeUnpublishVolume_Handler
    /go/pkg/mod/github.com/container-storage-interface/spec@v1.4.0/lib/go/csi/csi.pb.go:6077
google.golang.org/grpc.(*Server).processUnaryRPC
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1024
google.golang.org/grpc.(*Server).handleStream
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:722
2022-02-08T09:24:49.367Z    ERROR   service/node.go:219 Unmount failed: rpc error: code = Internal desc = error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory
Unmounting arguments: /var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount
    {"TraceId": "de9c8444-f7d5-4a3f-98da-a42f468e2d4c"}
sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).NodeUnpublishVolume
    /build/pkg/csi/service/node.go:219
github.com/container-storage-interface/spec/lib/go/csi._Node_NodeUnpublishVolume_Handler
    /go/pkg/mod/github.com/container-storage-interface/spec@v1.4.0/lib/go/csi/csi.pb.go:6077
google.golang.org/grpc.(*Server).processUnaryRPC
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1024
google.golang.org/grpc.(*Server).handleStream
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
    /go/pkg/mod/google.golang.org/grpc@v1.27.1/server.go:722
2022-02-08T09:24:49.924Z    INFO    service/node.go:150 NodePublishVolume: called with args {VolumeId:3de5cd7b-693f-41d3-9820-ea9c6ea32480 PublishContext:map[diskUUID:6000c29a36da969051ffa549a497ab72 type:vSphere CNS Block Volume] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount TargetPath:/var/lib/kubelet/pods/144ca723-4a9f-4010-8318-481077b3ec62/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1605195819421-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}   {"TraceId": "ef841355-4f11-4f6d-8b63-223ef430086c"}
2022-02-08T09:24:49.924Z    DEBUG   osutils/linux_os_utils.go:970   Checking if volume "3de5cd7b-693f-41d3-9820-ea9c6ea32480" is attached to disk "6000c29a36da969051ffa549a497ab72"    {"TraceId": "ef841355-4f11-4f6d-8b63-223ef430086c"}

LinAnt commented 2 years ago

And here are some kubelet logs from the node:

{"log":"I0208 09:32:59.559846 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:32:59.559903318Z"}
{"log":"I0208 09:32:59.559882 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.559952445Z"}
{"log":"I0208 09:32:59.559890 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:32:59.559968683Z"}
{"log":"I0208 09:32:59.559928 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:32:59.559975379Z"}
{"log":"I0208 09:32:59.559964 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc003effaf0, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.560008793Z"}
{"log":"I0208 09:32:59.560186 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc003effaf0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.56024554Z"}
{"log":"E0208 09:32:59.565506 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName:62400cd4-61f7-4237-8961-7cb8a1e150e9 nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:00.06547101 +0000 UTC m=+4665365.989498863 (durationBeforeRetry 500ms). Error: \"UnmountVolume.TearDown failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = Unmount failed: rpc error: code = Internal desc = error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory\\nUnmounting arguments: /var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount\\n\"\n","stream":"stderr","time":"2022-02-08T09:32:59.565611352Z"}
{"log":"I0208 09:32:59.565536 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:32:59.565629835Z"}
{"log":"I0208 09:32:59.660857 3212256 reconciler.go:269] \"operationExecutor.MountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") \"\n","stream":"stderr","time":"2022-02-08T09:32:59.66181947Z"}
{"log":"I0208 09:32:59.661324 3212256 operation_generator.go:575] MountVolume.WaitForAttach entering for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"\"\n","stream":"stderr","time":"2022-02-08T09:32:59.661841598Z"}
{"log":"I0208 09:32:59.723981 3212256 operation_generator.go:585] MountVolume.WaitForAttach succeeded for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"csi-5ae5f2da9cf125e8ad6c5ca3076ee4d369e25f2fd7061fd92d0d7fe9a3a1f2ec\"\n","stream":"stderr","time":"2022-02-08T09:32:59.72441655Z"}
{"log":"I0208 09:32:59.724070 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:32:59.724512589Z"}
{"log":"I0208 09:32:59.724081 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:32:59.724531029Z"}
{"log":"I0208 09:32:59.724106 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.724539222Z"}
{"log":"I0208 09:32:59.724116 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:32:59.724618031Z"}
{"log":"I0208 09:32:59.724173 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:32:59.724626641Z"}
{"log":"I0208 09:32:59.724285 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0022e7560, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.724634417Z"}
{"log":"I0208 09:32:59.724580 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0022e7560, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.724831826Z"}
{"log":"I0208 09:32:59.725744 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:32:59.726035209Z"}
{"log":"I0208 09:32:59.925598 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:32:59.92590795Z"}
{"log":"I0208 09:32:59.925635 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:32:59.925940964Z"}
{"log":"I0208 09:32:59.925677 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.925961532Z"}
{"log":"I0208 09:32:59.925687 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:32:59.9277654Z"}
{"log":"I0208 09:32:59.925761 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:32:59.92777991Z"}
{"log":"I0208 09:32:59.925812 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc001a48e60, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.927784453Z"}
{"log":"I0208 09:32:59.926460 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc001a48e60, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.927791832Z"}
{"log":"I0208 09:32:59.932299 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:32:59.932430662Z"}
{"log":"E0208 09:32:59.932496 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName: nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:00.432457467 +0000 UTC m=+4665366.356485317 (durationBeforeRetry 500ms). Error: \"MountVolume.SetUp failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") : rpc error: code = FailedPrecondition desc = volume ID: \\\"3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\" does not appear staged to \\\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount\\\"\"\n","stream":"stderr","time":"2022-02-08T09:32:59.932585967Z"}
{"log":"I0208 09:32:59.964232 3212256 reconciler.go:196] \"operationExecutor.UnmountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") \"\n","stream":"stderr","time":"2022-02-08T09:32:59.964456189Z"}
{"log":"I0208 09:32:59.964425 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:32:59.964498178Z"}
{"log":"I0208 09:32:59.964449 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:32:59.964506961Z"}
{"log":"I0208 09:32:59.964511 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.964612937Z"}
{"log":"I0208 09:32:59.964529 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:32:59.964630702Z"}
{"log":"I0208 09:32:59.964584 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:32:59.964654026Z"}
{"log":"I0208 09:32:59.964603 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc002a6b710, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.964690706Z"}
{"log":"I0208 09:32:59.967015 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc002a6b710, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:32:59.967173722Z"}
{"log":"E0208 09:32:59.973599 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName:62400cd4-61f7-4237-8961-7cb8a1e150e9 nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:00.473554194 +0000 UTC m=+4665366.397582042 (durationBeforeRetry 500ms). Error: \"UnmountVolume.TearDown failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = Unmount failed: rpc error: code = Internal desc = error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory\\nUnmounting arguments: /var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount\\n\"\n","stream":"stderr","time":"2022-02-08T09:32:59.973747325Z"}
{"log":"I0208 09:32:59.973659 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:32:59.973763962Z"}
{"log":"I0208 09:33:00.065752 3212256 reconciler.go:269] \"operationExecutor.MountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") \"\n","stream":"stderr","time":"2022-02-08T09:33:00.065938365Z"}
{"log":"I0208 09:33:00.066174 3212256 operation_generator.go:575] MountVolume.WaitForAttach entering for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.066356755Z"}
{"log":"I0208 09:33:00.124578 3212256 operation_generator.go:585] MountVolume.WaitForAttach succeeded for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"csi-5ae5f2da9cf125e8ad6c5ca3076ee4d369e25f2fd7061fd92d0d7fe9a3a1f2ec\"\n","stream":"stderr","time":"2022-02-08T09:33:00.124704119Z"}
{"log":"I0208 09:33:00.124720 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.124798353Z"}
{"log":"I0208 09:33:00.124744 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.124814563Z"}
{"log":"I0208 09:33:00.124801 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.124864069Z"}
{"log":"I0208 09:33:00.124822 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.12487637Z"}
{"log":"I0208 09:33:00.124876 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.124925708Z"}
{"log":"I0208 09:33:00.124932 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc002a6bfb0, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.124992425Z"}
{"log":"I0208 09:33:00.125262 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc002a6bfb0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.125394816Z"}
{"log":"I0208 09:33:00.126041 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.126210527Z"}
{"log":"I0208 09:33:00.323256 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.323433612Z"}
{"log":"I0208 09:33:00.323289 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.323465696Z"}
{"log":"I0208 09:33:00.323320 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.323471097Z"}
{"log":"I0208 09:33:00.323338 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.323476038Z"}
{"log":"I0208 09:33:00.323409 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.32358416Z"}
{"log":"I0208 09:33:00.323458 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0051577f0, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.3236086Z"}
{"log":"I0208 09:33:00.323919 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0051577f0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.324025307Z"}
{"log":"I0208 09:33:00.327881 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.327979497Z"}
{"log":"E0208 09:33:00.328045 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName: nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:00.828001326 +0000 UTC m=+4665366.752029175 (durationBeforeRetry 500ms). Error: \"MountVolume.SetUp failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") : rpc error: code = FailedPrecondition desc = volume ID: \\\"3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\" does not appear staged to \\\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount\\\"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.328170711Z"}
{"log":"I0208 09:33:00.368797 3212256 reconciler.go:196] \"operationExecutor.UnmountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") \"\n","stream":"stderr","time":"2022-02-08T09:33:00.368972605Z"}
{"log":"I0208 09:33:00.369006 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.369085551Z"}
{"log":"I0208 09:33:00.369030 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.369101412Z"}
{"log":"I0208 09:33:00.369087 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.369200311Z"}
{"log":"I0208 09:33:00.369099 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.369233729Z"}
{"log":"I0208 09:33:00.369157 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.369243326Z"}
{"log":"I0208 09:33:00.369237 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0014604e0, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.369309339Z"}
{"log":"I0208 09:33:00.369632 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0014604e0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.369827049Z"}
{"log":"E0208 09:33:00.378553 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName:62400cd4-61f7-4237-8961-7cb8a1e150e9 nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:00.878513061 +0000 UTC m=+4665366.802540916 (durationBeforeRetry 500ms). Error: \"UnmountVolume.TearDown failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = Unmount failed: rpc error: code = Internal desc = error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory\\nUnmounting arguments: /var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount\\n\"\n","stream":"stderr","time":"2022-02-08T09:33:00.378648137Z"}
{"log":"I0208 09:33:00.378582 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.378668564Z"}
{"log":"I0208 09:33:00.471280 3212256 reconciler.go:269] \"operationExecutor.MountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") \"\n","stream":"stderr","time":"2022-02-08T09:33:00.471727967Z"}
{"log":"I0208 09:33:00.471599 3212256 operation_generator.go:575] MountVolume.WaitForAttach entering for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.471749976Z"}
{"log":"I0208 09:33:00.524311 3212256 operation_generator.go:585] MountVolume.WaitForAttach succeeded for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"csi-5ae5f2da9cf125e8ad6c5ca3076ee4d369e25f2fd7061fd92d0d7fe9a3a1f2ec\"\n","stream":"stderr","time":"2022-02-08T09:33:00.524523157Z"}
{"log":"I0208 09:33:00.524432 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.524557038Z"}
{"log":"I0208 09:33:00.524450 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.524562495Z"}
{"log":"I0208 09:33:00.524491 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.524618933Z"}
{"log":"I0208 09:33:00.524511 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.524660585Z"}
{"log":"I0208 09:33:00.524578 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.524667018Z"}
{"log":"I0208 09:33:00.524638 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0024598b0, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.524844811Z"}
{"log":"I0208 09:33:00.525356 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0024598b0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.525540345Z"}
{"log":"I0208 09:33:00.526425 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.526549933Z"}
{"log":"I0208 09:33:00.724346 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.724653072Z"}
{"log":"I0208 09:33:00.724384 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.724698978Z"}
{"log":"I0208 09:33:00.724420 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.72470465Z"}
{"log":"I0208 09:33:00.724432 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.72470985Z"}
{"log":"I0208 09:33:00.724502 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.724728576Z"}
{"log":"I0208 09:33:00.724605 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc003738f10, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.7247356Z"}
{"log":"I0208 09:33:00.725220 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc003738f10, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.725371456Z"}
{"log":"I0208 09:33:00.730512 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.730688824Z"}
{"log":"E0208 09:33:00.730683 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName: nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:01.230640211 +0000 UTC m=+4665367.154668046 (durationBeforeRetry 500ms). Error: \"MountVolume.SetUp failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") : rpc error: code = FailedPrecondition desc = volume ID: \\\"3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\" does not appear staged to \\\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/globalmount\\\"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.730861004Z"}
{"log":"I0208 09:33:00.774278 3212256 reconciler.go:196] \"operationExecutor.UnmountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") \"\n","stream":"stderr","time":"2022-02-08T09:33:00.77459205Z"}
{"log":"I0208 09:33:00.774464 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.774608507Z"}
{"log":"I0208 09:33:00.774492 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.774679705Z"}
{"log":"I0208 09:33:00.774517 3212256 reconciler.go:196] \"operationExecutor.UnmountVolume started for volume \\\"pvc-f203d574-4869-429f-9eb2-126781ef9fcd\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^a5e92d59-69bf-4808-b3aa-f15ee18268b4\\\") pod \\\"8a2561e2-370a-4fd0-89ee-6f56ebabcf4c\\\" (UID: \\\"8a2561e2-370a-4fd0-89ee-6f56ebabcf4c\\\") \"\n","stream":"stderr","time":"2022-02-08T09:33:00.774686849Z"}
{"log":"I0208 09:33:00.774542 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.774691807Z"}
{"log":"I0208 09:33:00.774564 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.774703684Z"}
{"log":"I0208 09:33:00.774620 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.774711006Z"}
{"log":"I0208 09:33:00.774654 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0037392f0, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.774754427Z"}
{"log":"I0208 09:33:00.774705 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.774769038Z"}
{"log":"I0208 09:33:00.774730 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.774789559Z"}
{"log":"I0208 09:33:00.774768 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.774833033Z"}
{"log":"I0208 09:33:00.774795 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.774861248Z"}
{"log":"I0208 09:33:00.774830 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.774866757Z"}
{"log":"I0208 09:33:00.774903 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc002eb2060, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.774986324Z"}
{"log":"I0208 09:33:00.775235 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc002eb2060, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.775310918Z"}
{"log":"I0208 09:33:00.775243 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0037392f0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.77532109Z"}
{"log":"E0208 09:33:00.782720 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480 podName:62400cd4-61f7-4237-8961-7cb8a1e150e9 nodeName:}\" failed. No retries permitted until 2022-02-08 09:33:01.282679092 +0000 UTC m=+4665367.206706942 (durationBeforeRetry 500ms). Error: \"UnmountVolume.TearDown failed for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\" (UID: \\\"62400cd4-61f7-4237-8961-7cb8a1e150e9\\\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = Unmount failed: rpc error: code = Internal desc = error getting block device for volume: 3de5cd7b-693f-41d3-9820-ea9c6ea32480, err: lstat /dev/sdb: no such file or directory\\nUnmounting arguments: /var/lib/kubelet/pods/62400cd4-61f7-4237-8961-7cb8a1e150e9/volumes/kubernetes.io~csi/pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c/mount\\n\"\n","stream":"stderr","time":"2022-02-08T09:33:00.782818849Z"}
{"log":"I0208 09:33:00.782750 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.782843322Z"}
{"log":"E0208 09:33:00.783671 3212256 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^a5e92d59-69bf-4808-b3aa-f15ee18268b4 podName:8a2561e2-370a-4fd0-89ee-6f56ebabcf4c nodeName:}\" failed. No retries permitted until 2022-02-08 09:35:02.783636307 +0000 UTC m=+4665488.707664158 (durationBeforeRetry 2m2s). Error: \"UnmountVolume.TearDown failed for volume \\\"pvc-f203d574-4869-429f-9eb2-126781ef9fcd\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^a5e92d59-69bf-4808-b3aa-f15ee18268b4\\\") pod \\\"8a2561e2-370a-4fd0-89ee-6f56ebabcf4c\\\" (UID: \\\"8a2561e2-370a-4fd0-89ee-6f56ebabcf4c\\\") : kubernetes.io/csi: mounter.TearDownAt failed: rpc error: code = Internal desc = Unmount failed: rpc error: code = Internal desc = error getting block device for volume: a5e92d59-69bf-4808-b3aa-f15ee18268b4, err: lstat /dev/sdc: no such file or directory\\nUnmounting arguments: /var/lib/kubelet/pods/8a2561e2-370a-4fd0-89ee-6f56ebabcf4c/volumes/kubernetes.io~csi/pvc-f203d574-4869-429f-9eb2-126781ef9fcd/mount\\n\"\n","stream":"stderr","time":"2022-02-08T09:33:00.783769847Z"}
{"log":"I0208 09:33:00.783706 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.783784522Z"}
{"log":"I0208 09:33:00.875322 3212256 reconciler.go:269] \"operationExecutor.MountVolume started for volume \\\"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\\\" (UniqueName: \\\"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\\\") pod \\\"dev-loki-0\\\" (UID: \\\"144ca723-4a9f-4010-8318-481077b3ec62\\\") \"\n","stream":"stderr","time":"2022-02-08T09:33:00.875528958Z"}
{"log":"I0208 09:33:00.875827 3212256 operation_generator.go:575] MountVolume.WaitForAttach entering for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.875987847Z"}
{"log":"I0208 09:33:00.927878 3212256 operation_generator.go:585] MountVolume.WaitForAttach succeeded for volume \"pvc-9c61b68f-73a0-45ef-a610-35e1e8c43b9c\" (UniqueName: \"kubernetes.io/csi/csi.vsphere.vmware.com^3de5cd7b-693f-41d3-9820-ea9c6ea32480\") pod \"dev-loki-0\" (UID: \"144ca723-4a9f-4010-8318-481077b3ec62\") DevicePath \"csi-5ae5f2da9cf125e8ad6c5ca3076ee4d369e25f2fd7061fd92d0d7fe9a3a1f2ec\"\n","stream":"stderr","time":"2022-02-08T09:33:00.928021986Z"}
{"log":"I0208 09:33:00.927963 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:00.9280761Z"}
{"log":"I0208 09:33:00.927973 3212256 clientconn.go:106] scheme \"\" not registered, fallback to default scheme\n","stream":"stderr","time":"2022-02-08T09:33:00.928082679Z"}
{"log":"I0208 09:33:00.928005 3212256 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.928089618Z"}
{"log":"I0208 09:33:00.928019 3212256 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2022-02-08T09:33:00.928094642Z"}
{"log":"I0208 09:33:00.928077 3212256 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick\n","stream":"stderr","time":"2022-02-08T09:33:00.928460227Z"}
{"log":"I0208 09:33:00.928356 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc003739c40, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.928475919Z"}
{"log":"I0208 09:33:00.928631 3212256 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc003739c40, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2022-02-08T09:33:00.928980515Z"}
{"log":"I0208 09:33:00.929636 3212256 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2022-02-08T09:33:00.929756827Z"}
{"log":"I0208 09:33:01.133714 3212256 clientconn.go:106] parsed scheme: \"\"\n","stream":"stderr","time":"2022-02-08T09:33:01.133899439Z"}
Invader007 commented 2 years ago

I'm wondering if this log entry could be right: "Datacenter: Datacenter [Datacenter: Datacenter:datacenter-21, VirtualCenterHost: 10.22.34.1]]"

I mean, Datacenter rendered as a key:value list where the Datacenter value is itself another key:value string?

NareshKoduru commented 2 years ago

Hello, we are seeing similar errors in our production clusters: pods get stuck in the Init phase, and manually deleting the pod fixes the problem for us. Example event:

Warning FailedMount 6m5s (x30 over 61m) kubelet MountVolume.SetUp failed for volume "pvc-c322045b-b088-4a94-ac74-233064f96190" : rpc error: code = FailedPrecondition desc = Volume ID: "0b39c9bd-577b-4add-a48a-802357b42d30" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c322045b-b088-4a94-ac74-233064f96190/globalmount"

Looking further, we noticed that this happens when a killed or restarted pod gets scheduled back onto the same node it was previously running on.

We looked into the vsphere-csi-node logs on the node where the affected pods are scheduled and observed that no NodeStageVolume call is logged (perhaps that step is never executed or is skipped). NodeUnpublishVolume and NodeUnstageVolume completed successfully, and then we see NodePublishVolume logged without a preceding NodeStageVolume call.
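
For context, the "does not appear staged" message in the events above comes from a precondition check: before bind-mounting into the pod, NodePublishVolume verifies that the per-volume global staging directory is already a mount point, which is only true if NodeStageVolume ran on that node. Below is a minimal sketch of such a check, assuming the usual Kubernetes mount-utils helpers; the checkStaged helper is illustrative, not the driver's actual implementation.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	mount "k8s.io/mount-utils"
)

// checkStaged is a hypothetical helper: it fails with FailedPrecondition when
// the staging directory is not a mount point, i.e. NodeStageVolume never ran
// (or was already undone) on this node.
func checkStaged(volumeID, stagingTargetPath string) error {
	mounter := mount.New("")
	notMnt, err := mounter.IsLikelyNotMountPoint(stagingTargetPath)
	if err != nil {
		return status.Errorf(codes.Internal, "could not inspect %q: %v", stagingTargetPath, err)
	}
	if notMnt {
		// Same error pattern as the kubelet events quoted above.
		return status.Errorf(codes.FailedPrecondition,
			"volume ID: %q does not appear staged to %q", volumeID, stagingTargetPath)
	}
	return nil
}

func main() {
	err := checkStaged("0b39c9bd-577b-4add-a48a-802357b42d30",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c322045b-b088-4a94-ac74-233064f96190/globalmount")
	fmt.Println(err)
}
```

This lines up with the observation above: NodeUnstageVolume ran and removed the global mount, but no new NodeStageVolume was issued before the next NodePublishVolume, so a check like this keeps failing on every retry.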

Environment:

- csi-vsphere version: v2.2.1
- vsphere-csi-node: v2.2.1
- Kubernetes version: v1.20.11
- vSphere version: 7.0.3.00300
- OS (e.g. from /etc/os-release): Flatcar Container Linux by Kinvolk 2905.2.6 (Oklo)
- Kernel (e.g. uname -a): 5.10.75-flatcar

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mvillarjuan commented 2 years ago

Hello. We are experiencing this issue in one of our production clusters. One pod is getting stuck in Init status and we are getting the following event:

MountVolume.SetUp failed for volume "pvc-e6043fb3-0490-45ca-985d-e8972a0b3fc3" : rpc error: code = FailedPrecondition desc = Volume ID: "30a7a050-6c14-4b0a-80f0-adf5bd1c0c16" does not appear staged to "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e6043fb3-0490-45ca-985d-e8972a0b3fc3/globalmount"

Killing the pod usually fixes this issue.

Environment:

- csi-vsphere version: v2.1.0_vmware.1
- vsphere-csi-node: v2.1.0_vmware.1
- Kubernetes version: v1.21.8
- vSphere version: 7.0.3.00300
- OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS (Focal Fossa)
- Kernel (e.g. uname -a): 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

divyenpatel commented 1 year ago

/remove-lifecycle stale

divyenpatel commented 1 year ago

/remove-lifecycle rotten

papanito commented 1 year ago

We also experience this issue.

divyenpatel commented 1 year ago

This issue should be fixed in vSphere CSI Driver 3.0. Refer to https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/2242#issuecomment-1471524820

divyenpatel commented 1 year ago

https://github.com/kubernetes-sigs/vsphere-csi-driver/releases/tag/v3.0.0 is released with the fix for this issue.

divyenpatel commented 11 months ago

@mohideen @papanito @mvillarjuan @NareshKoduru @LinAnt @bortek @Anil-YadavK8s

By any chance, can you confirm whether you have the multipath device mapper enabled on the node VMs where you are facing this issue?

If that is the case, we need to reopen this issue, as we do not yet have a fix or support for the multipath device mapper in the vSphere CSI Driver.

@dobsonj is helping to fix this issue - https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/2593

brngates98 commented 7 months ago

Hate to beat a dead horse, but I am having this exact issue on vsphere-csi-driver 3.0.1.

zfrhv commented 7 months ago

I had a similar problem. In my case multipathd was enabled, which caused the CSI driver to attempt to mount two links to the same disk at the same path.

I don't know if we are on an old version or something (we are running on OpenShift, so it's Red Hat's build of this driver), but disabling multipathd solved the problem.
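
If you want to verify whether multipath has claimed the disks on a node before disabling it, a small stand-alone diagnostic like the one below (illustrative only, not part of the driver) lists the /dev/disk/by-id links and flags entries that resolve to a device-mapper node instead of a plain /dev/sdX device:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const byID = "/dev/disk/by-id"
	entries, err := os.ReadDir(byID)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", byID, err)
		os.Exit(1)
	}
	for _, e := range entries {
		link := filepath.Join(byID, e.Name())
		target, err := filepath.EvalSymlinks(link)
		if err != nil {
			continue
		}
		tag := ""
		// When multipathd owns a LUN, its by-id links typically resolve to a
		// /dev/dm-* node rather than the underlying /dev/sd* device.
		if strings.HasPrefix(filepath.Base(target), "dm-") {
			tag = "  <- device-mapper (multipath?)"
		}
		fmt.Printf("%-60s -> %s%s\n", e.Name(), target, tag)
	}
}
```

Seeing the volumes show up behind dm-* nodes would line up with divyenpatel's note above that the driver does not yet support the multipath device mapper.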

brngates98 commented 7 months ago

Interesting. I am using Ubuntu Server 22.04 nodes, and multipath is indeed enabled there as well.

I will attempt to disable multipath and try again.


jrife commented 5 months ago

I'm seeing this multipath issue in version 3.1.1 as well.