vultr / vultr-csi

Container Storage Interface (CSI) Driver for Vultr Block Storage
Apache License 2.0

[BUG] - Volumes not attaching after VKE upgrade #101

Closed: kallisti5 closed this issue 1 year ago

kallisti5 commented 1 year ago

Describe the bug
Upgraded a VKE cluster from one 1.23.x release to the latest 1.23.x.

After the upgrade, volumes stopped attaching to pods: "rpc error: code = NotFound desc = cannot get volume: gave up after 1 attempts, last error : context deadline exceeded"

To Reproduce
Existing workloads, unmodified. Upgrade from 1.23.8 (I think) to 1.23.14 (the latest 1.23.x).

Expected behavior
Continued functionality; volumes keep attaching after the upgrade.

Screenshots
k8s_error, vultr_volumes, k8s_event_failure
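
A rough way to see the same failure from the CLI (the namespace and pod names are from my setup, so swap in your own):

# pod events show the attach/mount timeout
kubectl describe pod postgres-74dd68dd9b-4rkwr -n ukiyo-cafe

# VolumeAttachment objects for the Vultr CSI driver never reach ATTACHED=true
kubectl get volumeattachments

# the CSI controller and node plugin pods live in kube-system
kubectl get pods -n kube-system | grep csi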

kallisti5 commented 1 year ago

Here's the kubelet log from the node attempting to bring up the pod:

Dec 22 14:55:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:55:51.377352    2896 kubelet.go:2096] "SyncLoop ADD" source="api" pods=[ukiyo-cafe/postgres-74dd68dd9b-4rkwr]
Dec 22 14:55:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:55:51.377404    2896 topology_manager.go:205] "Topology Admit Handler"
Dec 22 14:55:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:55:51.448800    2896 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cl8\" (UniqueName: \"kubernetes.io/projected/f33e17c8-7fa3-4137-a69d-bf9be1163dd3-kube-api-access-j5cl8\") pod \"postgres-74dd68dd9b-4rkwr\" (UID: \"f33e17c8-7fa3-4137-a69d-bf9be1163dd3\") " pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 14:55:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:55:51.549406    2896 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-j5cl8\" (UniqueName: \"kubernetes.io/projected/f33e17c8-7fa3-4137-a69d-bf9be1163dd3-kube-api-access-j5cl8\") pod \"postgres-74dd68dd9b-4rkwr\" (UID: \"f33e17c8-7fa3-4137-a69d-bf9be1163dd3\") " pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 14:55:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:55:51.560392    2896 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5cl8\" (UniqueName: \"kubernetes.io/projected/f33e17c8-7fa3-4137-a69d-bf9be1163dd3-kube-api-access-j5cl8\") pod \"postgres-74dd68dd9b-4rkwr\" (UID: \"f33e17c8-7fa3-4137-a69d-bf9be1163dd3\") " pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 14:56:29 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 14:56:29.751311    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[redis-data], unattached volumes=[redis-data kube-api-access-zdsgw]: timed out waiting for the condition" pod="ukiyo-cafe/redis-54574755f9-hlhr8"
Dec 22 14:56:29 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 14:56:29.751350    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[redis-data], unattached volumes=[redis-data kube-api-access-zdsgw]: timed out waiting for the condition" pod="ukiyo-cafe/redis-54574755f9-hlhr8" podUID=5b97919a-cf6e-404b-b2ae-01c8e789eaa3
Dec 22 14:56:30 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:56:30.134620    2896 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdsgw\" (UniqueName: \"kubernetes.io/projected/5b97919a-cf6e-404b-b2ae-01c8e789eaa3-kube-api-access-zdsgw\") pod \"5b97919a-cf6e-404b-b2ae-01c8e789eaa3\" (UID: \"5b97919a-cf6e-404b-b2ae-01c8e789eaa3\") "
Dec 22 14:56:30 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:56:30.156217    2896 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b97919a-cf6e-404b-b2ae-01c8e789eaa3-kube-api-access-zdsgw" (OuterVolumeSpecName: "kube-api-access-zdsgw") pod "5b97919a-cf6e-404b-b2ae-01c8e789eaa3" (UID: "5b97919a-cf6e-404b-b2ae-01c8e789eaa3"). InnerVolumeSpecName "kube-api-access-zdsgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 22 14:56:30 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:56:30.235745    2896 reconciler.go:399] "Volume detached for volume \"kube-api-access-zdsgw\" (UniqueName: \"kubernetes.io/projected/5b97919a-cf6e-404b-b2ae-01c8e789eaa3-kube-api-access-zdsgw\") on node \"ams-hf-prod-ce8b267e9400\" DevicePath \"\""
Dec 22 14:56:31 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:56:31.050957    2896 kubelet.go:2112] "SyncLoop DELETE" source="api" pods=[ukiyo-cafe/redis-54574755f9-hlhr8]
Dec 22 14:56:31 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:56:31.054152    2896 kubelet.go:2106] "SyncLoop REMOVE" source="api" pods=[ukiyo-cafe/redis-54574755f9-hlhr8]
Dec 22 14:56:31 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 14:56:31.752540    2896 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5b97919a-cf6e-404b-b2ae-01c8e789eaa3 path="/var/lib/kubelet/pods/5b97919a-cf6e-404b-b2ae-01c8e789eaa3/volumes"
Dec 22 14:57:54 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 14:57:54.379692    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 14:57:54 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 14:57:54.379735    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:00:09 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:00:09.750728    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:00:09 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:00:09.750762    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:02:27 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:02:27.751397    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:02:27 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:02:27.751437    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:04:42 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:04:42.751492    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:04:42 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:04:42.751530    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:07:00 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:07:00.752045    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:07:00 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:07:00.752084    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:09:18 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:09:18.751196    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:09:18 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:09:18.751243    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:11:34 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:11:34.751517    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:11:34 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:11:34.752120    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:13:49 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:13:49.750787    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:13:49 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:13:49.751364    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:16:03 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:16:03.750624    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:16:03 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:16:03.751341    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:18:20 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:18:20.751331    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:18:20 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:18:20.752038    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:20:35 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:20:35.750266    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:20:35 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:20:35.751106    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:22:53 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:22:53.750778    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:22:53 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:22:53.751365    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:25:08 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:25:08.750374    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:25:08 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:25:08.750412    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:27:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:27:26.751384    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:27:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:27:26.751420    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:29:43 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:29:43.750718    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:29:43 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:29:43.751375    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:31:57 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:31:57.751545    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:31:57 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:31:57.751578    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:34:11 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:34:11.750485    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:34:11 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:34:11.751109    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:36:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:36:26.751266    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:36:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:36:26.751312    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:38:44 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:38:44.750617    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:38:44 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:38:44.750657    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:40:59 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:40:59.750593    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:40:59 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:40:59.751233    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:43:15 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:43:15.751638    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:43:15 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:43:15.752229    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:45:32 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:45:32.750614    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:45:32 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:45:32.750656    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:47:46 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:47:46.750857    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:47:46 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:47:46.750889    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:50:02 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:50:02.751809    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:50:02 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:50:02.751861    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:52:19 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:52:19.750571    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:52:19 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:52:19.750602    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:54:36 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:54:36.751491    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:54:36 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:54:36.751533    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:56:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:56:51.750728    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:56:51 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:56:51.750757    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 15:59:08 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:59:08.751762    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 15:59:08 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 15:59:08.752318    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:01:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:01:26.751545    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:01:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:01:26.751582    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:03:44 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:03:44.750586    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:03:44 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:03:44.750636    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:06:01 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:06:01.750293    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:06:01 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:06:01.750913    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:08:19 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:08:19.750783    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:08:19 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:08:19.751434    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:10:33 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:10:33.750753    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:10:33 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:10:33.751434    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[kube-api-access-j5cl8 postgres-data]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:12:49 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:12:49.751054    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:12:49 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:12:49.751888    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:15:03 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:15:03.751506    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:15:03 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:15:03.752262    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.555191    2896 kubelet.go:2096] "SyncLoop ADD" source="api" pods=[kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55]
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.555259    2896 topology_manager.go:205] "Topology Admit Handler"
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.617608    2896 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c49dx\" (UniqueName: \"kubernetes.io/projected/701c32a3-709e-4210-8f23-5a6fc7cb3044-kube-api-access-c49dx\") pod \"node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55\" (UID: \"701c32a3-709e-4210-8f23-5a6fc7cb3044\") " pod="kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55"
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.718758    2896 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-c49dx\" (UniqueName: \"kubernetes.io/projected/701c32a3-709e-4210-8f23-5a6fc7cb3044-kube-api-access-c49dx\") pod \"node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55\" (UID: \"701c32a3-709e-4210-8f23-5a6fc7cb3044\") " pod="kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55"
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.728248    2896 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-c49dx\" (UniqueName: \"kubernetes.io/projected/701c32a3-709e-4210-8f23-5a6fc7cb3044-kube-api-access-c49dx\") pod \"node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55\" (UID: \"701c32a3-709e-4210-8f23-5a6fc7cb3044\") " pod="kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55"
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.858357    2896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55"
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.996458    2896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 22 16:15:25 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:25.996549    2896 provider.go:82] Docker config file not found: couldn't find valid .dockercfg after checking in [/var/lib/kubelet   /]
Dec 22 16:15:26 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:26.232479    2896 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55" event=&{ID:701c32a3-709e-4210-8f23-5a6fc7cb3044 Type:ContainerStarted Data:a4a1101e6fb9d8104b6b658190189ab23143ae85906f08a2a07d216f41a557f8}
Dec 22 16:15:29 ams-hf-prod-ce8b267e9400 kubelet[2896]: I1222 16:15:29.239951    2896 kubelet.go:2134] "SyncLoop (PLEG): event for pod" pod="kube-system/node-shell-adee0a6a-0da4-47e3-82ca-5246c4c5eb55" event=&{ID:701c32a3-709e-4210-8f23-5a6fc7cb3044 Type:ContainerStarted Data:ccbefd162ed4091bbc35859f526897a3d945142a50729c51ac7cdce71844e74f}
Dec 22 16:17:21 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:17:21.750316    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:17:21 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:17:21.750940    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
Dec 22 16:19:36 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:19:36.751244    2896 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr"
Dec 22 16:19:36 ams-hf-prod-ce8b267e9400 kubelet[2896]: E1222 16:19:36.751895    2896 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[postgres-data], unattached volumes=[postgres-data kube-api-access-j5cl8]: timed out waiting for the condition" pod="ukiyo-cafe/postgres-74dd68dd9b-4rkwr" podUID=f33e17c8-7fa3-4137-a69d-bf9be1163dd3
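
For reference, the log above was pulled on the node itself; assuming kubelet runs under systemd (which the journal-style output suggests), something like:

journalctl -u kubelet --since "2022-12-22 14:55" --no-pager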
kallisti5 commented 1 year ago

csi-attacher:

I1222 16:27:30.773193       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I1222 16:27:30.773209       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I1222 16:27:30.773262       1 controller.go:208] Started VA processing "csi-4758eb0c71ed6a04921b399c958b578cdeed1e806bb02bd0503a2d2e851f6a4d"
I1222 16:27:30.773271       1 csi_handler.go:221] CSIHandler: processing VA "csi-4758eb0c71ed6a04921b399c958b578cdeed1e806bb02bd0503a2d2e851f6a4d"
I1222 16:27:30.773274       1 csi_handler.go:272] Starting detach operation for "csi-4758eb0c71ed6a04921b399c958b578cdeed1e806bb02bd0503a2d2e851f6a4d"
I1222 16:27:30.773293       1 csi_handler.go:279] Detaching "csi-4758eb0c71ed6a04921b399c958b578cdeed1e806bb02bd0503a2d2e851f6a4d"
I1222 16:27:30.773299       1 csi_handler.go:737] Found NodeID 9650df76-8bb7-4125-9289-787b78d8635d in CSINode ams-hf-prod-ce8b267e9400
I1222 16:27:30.773303       1 controller.go:208] Started VA processing "csi-765830a32f70e0936e6405a23578fcc10f89666f58061abd008e99c38e1c63d1"
I1222 16:27:30.773308       1 connection.go:183] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I1222 16:27:30.773310       1 csi_handler.go:221] CSIHandler: processing VA "csi-765830a32f70e0936e6405a23578fcc10f89666f58061abd008e99c38e1c63d1"
I1222 16:27:30.773314       1 csi_handler.go:248] Attaching "csi-765830a32f70e0936e6405a23578fcc10f89666f58061abd008e99c38e1c63d1"
I1222 16:27:30.773317       1 csi_handler.go:418] Starting attach operation for "csi-765830a32f70e0936e6405a23578fcc10f89666f58061abd008e99c38e1c63d1"
I1222 16:27:30.773335       1 csi_handler.go:332] PV finalizer is already set on "pvc-2a4b9b3ec7cf40fb"
I1222 16:27:30.773342       1 csi_handler.go:737] Found NodeID d821c3f7-d105-4de3-91ea-d55669decbe1 in CSINode ams-hf-prod-3f3e3ec512d1
I1222 16:27:30.773348       1 csi_handler.go:301] VA finalizer is already set on "csi-765830a32f70e0936e6405a23578fcc10f89666f58061abd008e99c38e1c63d1"
I1222 16:27:30.773351       1 csi_handler.go:315] NodeID annotation is already set on "csi-765830a32f70e0936e6405a23578fcc10f89666f58061abd008e99c38e1c63d1"
I1222 16:27:30.773313       1 connection.go:184] GRPC request: {"node_id":"9650df76-8bb7-4125-9289-787b78d8635d","volume_id":"26222a8c-e021-4729-86c9-80ab5f6080fd"}
I1222 16:27:30.773357       1 connection.go:183] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I1222 16:27:30.773360       1 connection.go:184] GRPC request: {"node_id":"d821c3f7-d105-4de3-91ea-d55669decbe1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1658927074080-8081-block.csi.vultr.com"},"volume_id":"26222a8c-e021-4729-86c9-80ab5f6080fd"}
I1222 16:27:30.773432       1 controller.go:208] Started VA processing "csi-92ea81394e0ef5728263d07b8e5be8d1879d72515ac9c2ca778cb7750d9af663"
I1222 16:27:30.773440       1 csi_handler.go:221] CSIHandler: processing VA "csi-92ea81394e0ef5728263d07b8e5be8d1879d72515ac9c2ca778cb7750d9af663"
I1222 16:27:30.773445       1 csi_handler.go:248] Attaching "csi-92ea81394e0ef5728263d07b8e5be8d1879d72515ac9c2ca778cb7750d9af663"
I1222 16:27:30.773450       1 csi_handler.go:418] Starting attach operation for "csi-92ea81394e0ef5728263d07b8e5be8d1879d72515ac9c2ca778cb7750d9af663"
I1222 16:27:30.773460       1 csi_handler.go:332] PV finalizer is already set on "pvc-47dd7c6e98c04115"
I1222 16:27:30.773465       1 csi_handler.go:737] Found NodeID d821c3f7-d105-4de3-91ea-d55669decbe1 in CSINode ams-hf-prod-3f3e3ec512d1
I1222 16:27:30.773471       1 csi_handler.go:301] VA finalizer is already set on "csi-92ea81394e0ef5728263d07b8e5be8d1879d72515ac9c2ca778cb7750d9af663"
I1222 16:27:30.773475       1 csi_handler.go:315] NodeID annotation is already set on "csi-92ea81394e0ef5728263d07b8e5be8d1879d72515ac9c2ca778cb7750d9af663"
I1222 16:27:30.773479       1 connection.go:183] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I1222 16:27:30.773482       1 connection.go:184] GRPC request: {"node_id":"d821c3f7-d105-4de3-91ea-d55669decbe1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1658927074080-8081-block.csi.vultr.com"},"volume_id":"d92d638f-9b5b-4ae5-972f-da5190f348c1"}
I1222 16:27:45.773733       1 connection.go:186] GRPC response: {}
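
(The output above comes from the csi-attacher sidecar on the CSI controller pod; roughly, with the controller pod name substituted for whatever your cluster shows, and assuming the sidecar container is named csi-attacher as it is on mine:)

kubectl -n kube-system get pods | grep csi
kubectl -n kube-system logs <csi-controller-pod> -c csi-attacher --tail=200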
optik-aper commented 1 year ago

@kallisti5 Hello! Thanks for the info on this. We took a look and determined the problem was with coredns on your cluster: the CSI controller wasn't able to talk to the other pods, so we re-initialized coredns and it's working now. If you continue to have issues, though, open a support ticket and we can drill in again.
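
A quick way to sanity-check cluster DNS yourself (the busybox image and hostnames here are just common defaults, nothing VKE-specific):

# in-cluster name resolution
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local

# external resolution, e.g. the Vultr API endpoint the CSI controller calls
kubectl run dns-test-ext --rm -it --restart=Never --image=busybox:1.36 -- nslookup api.vultr.com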

dvcrn commented 1 year ago

Can you provide some context on what exactly the issue was here? I'm running into the same issue, also after a VKE upgrade. I was also suspecting coredns, but I have no clue what exactly is wrong, and I've already reverted my changes.

dvcrn commented 1 year ago

I've figured it out, but it took a lot of time to debug.

I used the sledgehammer method: created a new cluster, dumped the YAML for its kube-system resources, and compared them with my old cluster's. There were a few small differences in svc/kube-dns, deployment/coredns, and configmap/coredns.

Dump the YAML for each of those resources from the freshly created cluster with:

kubectl get <name here, like svc/kube-dns> -o yaml -n kube-system

Then delete the coredns-related resources on the old cluster and re-apply the dumped manifests:

kubectl delete svc/kube-dns deployment/coredns configmap/coredns -n kube-system
kubectl apply -f <path to yaml files dumped above>
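
Put together, the whole sequence looked roughly like this for me (the file names and --context values are placeholders for your own kubeconfig contexts, and you may need to strip cluster-specific fields such as resourceVersion, uid, and clusterIP from the dumps before applying):

# on the fresh cluster: dump the known-good definitions
kubectl --context fresh-cluster -n kube-system get svc/kube-dns -o yaml > kube-dns-svc.yaml
kubectl --context fresh-cluster -n kube-system get deployment/coredns -o yaml > coredns-deploy.yaml
kubectl --context fresh-cluster -n kube-system get configmap/coredns -o yaml > coredns-cm.yaml

# on the broken cluster: delete the old resources and apply the dumps
kubectl --context old-cluster -n kube-system delete svc/kube-dns deployment/coredns configmap/coredns
kubectl --context old-cluster -n kube-system apply -f kube-dns-svc.yaml -f coredns-deploy.yaml -f coredns-cm.yaml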

Now CSI is working again. Honestly, though, this should have been part of the upgrade process, or at least documented somewhere, because it looks like it would break any setup still running an old coredns configuration. I had an inkling that it was coredns-related, and this issue gave me the confirmation to dig deeper.

Hope this is useful to future googlers

optik-aper commented 1 year ago

@dvcrn Thanks for sharing this. It looks like there's a bug in k8s that we'll need to work around during the upgrade process. I hadn't realized it was a widespread issue. I'll re-open this and update it once we have a solution.