kubernetes-sigs / blob-csi-driver

Azure Blob Storage CSI driver
Apache License 2.0

When using CSI FUSE Blob driver to create PVs, unable to restart pods that mount volumes through PVCs #762

Closed ejschoen closed 1 year ago

ejschoen commented 2 years ago

What happened: Using the CSI blob fuse driver to statically mount a blob container as a persistent volume. It worked once, but after the pods restarted, it no longer works.

What you expected to happen: Expected PVCs to mount into pods.

How to reproduce it: Follow instructions at https://learn.microsoft.com/en-us/azure/aks/azure-csi-blob-storage-static?tabs=secret.
Enable the blob CSI driver via the EnableBlobCSIDriver feature flag and `az aks create --enable-blob-driver` on the cluster. Create a deployment that mounts a PVC bound to a PV backed by a blob container. Kill all of the pods that mount one of the persistent volumes. The pods will fail to restart.
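For reference, a static PV of the kind the tutorial produces looks roughly like this (a sketch built from the values visible in the logs below; the key field is `volumeHandle`, which must be unique across the cluster):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: i2kathena-shared-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse-premium
  csi:
    driver: blob.csi.azure.com
    # volumeHandle must be unique; reusing one across PV re-creations
    # triggers the mount failure described in this issue.
    volumeHandle: athena-shared-fuse-vh
    volumeAttributes:
      resourceGroup: AthenaRG
      storageAccount: i2kathena
      containerName: shared
      protocol: fuse
```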

Anything else we need to know?: See partial manual workaround at end of issue.

Environment:

CSI blob driver log:

I0928 22:41:32.371813   20049 blob.go:223] driver userAgent: blob.csi.azure.com/v1.16.0 AKS
I0928 22:41:32.372454   20049 azure.go:78] reading cloud config from secret kube-system/azure-cloud-provider
I0928 22:41:32.428180   20049 azure.go:85] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0928 22:41:32.428208   20049 azure.go:90] could not read cloud config from secret kube-system/azure-cloud-provider
I0928 22:41:32.428215   20049 azure.go:93] AZURE_CREDENTIAL_FILE env var set as /etc/kubernetes/azure.json
I0928 22:41:32.428242   20049 azure.go:104] read cloud config from file: /etc/kubernetes/azure.json successfully
I0928 22:41:32.429700   20049 azure_auth.go:245] Using AzurePublicCloud environment
I0928 22:41:32.429743   20049 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0928 22:41:32.429752   20049 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0928 22:41:32.429805   20049 azure_auth.go:113] azure: User Assigned MSI ID is client ID
I0928 22:41:32.429867   20049 azure.go:774] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0928 22:41:32.429941   20049 azure_interfaceclient.go:74] Azure InterfacesClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.429950   20049 azure_interfaceclient.go:77] Azure InterfacesClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.429970   20049 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.429977   20049 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.429995   20049 azure_snapshotclient.go:70] Azure SnapshotClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430001   20049 azure_snapshotclient.go:73] Azure SnapshotClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430013   20049 azure_storageaccountclient.go:70] Azure StorageAccountClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430020   20049 azure_storageaccountclient.go:73] Azure StorageAccountClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430028   20049 azure_diskclient.go:68] Azure DisksClient using API version: 2021-04-01
I0928 22:41:32.430038   20049 azure_diskclient.go:73] Azure DisksClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430045   20049 azure_diskclient.go:76] Azure DisksClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430056   20049 azure_vmclient.go:70] Azure VirtualMachine client (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430063   20049 azure_vmclient.go:73] Azure VirtualMachine client (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430077   20049 azure_vmssclient.go:70] Azure VirtualMachineScaleSetClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430084   20049 azure_vmssclient.go:73] Azure VirtualMachineScaleSetClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430097   20049 azure_vmssvmclient.go:74] Azure vmssVM client (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430105   20049 azure_vmssvmclient.go:77] Azure vmssVM client (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430127   20049 azure_routeclient.go:69] Azure RoutesClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430133   20049 azure_routeclient.go:72] Azure RoutesClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430145   20049 azure_subnetclient.go:70] Azure SubnetsClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430151   20049 azure_subnetclient.go:73] Azure SubnetsClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430178   20049 azure_routetableclient.go:69] Azure RouteTablesClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430187   20049 azure_routetableclient.go:72] Azure RouteTablesClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430206   20049 azure_loadbalancerclient.go:70] Azure LoadBalancersClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430213   20049 azure_loadbalancerclient.go:73] Azure LoadBalancersClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430227   20049 azure_securitygroupclient.go:70] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430234   20049 azure_securitygroupclient.go:73] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430250   20049 azure_publicipclient.go:74] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430256   20049 azure_publicipclient.go:77] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430279   20049 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01
I0928 22:41:32.430294   20049 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430300   20049 azure_vmasclient.go:73] Azure AvailabilitySetsClient  (write ops) using rate limit config: QPS=10, bucket=100
I0928 22:41:32.430371   20049 azure.go:1003] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10
I0928 22:41:32.430387   20049 azure.go:140] starting node server on node(aks-nodepool1-63215908-vmss000000)
I0928 22:41:32.430409   20049 blob.go:228] cloud: AzurePublicCloud, location: eastus, rg: mc_athenarg_athenadevcluster_eastus, VnetName: aks-vnet-23974123, VnetResourceGroup: , SubnetName: aks-subnet
I0928 22:41:32.430543   20049 mount_linux.go:208] Detected OS without systemd
I0928 22:41:32.430554   20049 driver.go:80] Enabling controller service capability: CREATE_DELETE_VOLUME
I0928 22:41:32.430564   20049 driver.go:80] Enabling controller service capability: EXPAND_VOLUME
I0928 22:41:32.430570   20049 driver.go:80] Enabling controller service capability: SINGLE_NODE_MULTI_WRITER
I0928 22:41:32.430578   20049 driver.go:99] Enabling volume access mode: SINGLE_NODE_WRITER
I0928 22:41:32.430584   20049 driver.go:99] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0928 22:41:32.430590   20049 driver.go:99] Enabling volume access mode: SINGLE_NODE_SINGLE_WRITER
I0928 22:41:32.430595   20049 driver.go:99] Enabling volume access mode: SINGLE_NODE_MULTI_WRITER
I0928 22:41:32.430600   20049 driver.go:99] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0928 22:41:32.430606   20049 driver.go:99] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0928 22:41:32.430611   20049 driver.go:99] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0928 22:41:32.430617   20049 driver.go:90] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0928 22:41:32.430623   20049 driver.go:90] Enabling node service capability: SINGLE_NODE_MULTI_WRITER
I0928 22:41:32.430877   20049 server.go:114] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0928 22:41:33.159216   20049 utils.go:75] GRPC call: /csi.v1.Identity/GetPluginInfo
I0928 22:41:33.159240   20049 utils.go:76] GRPC request: {}
I0928 22:41:33.161586   20049 utils.go:82] GRPC response: {"name":"blob.csi.azure.com","vendor_version":"v1.16.0"}
I0928 22:41:33.262309   20049 utils.go:75] GRPC call: /csi.v1.Identity/GetPluginInfo
I0928 22:41:33.262333   20049 utils.go:76] GRPC request: {}
I0928 22:41:33.262375   20049 utils.go:82] GRPC response: {"name":"blob.csi.azure.com","vendor_version":"v1.16.0"}
I0928 22:41:33.288327   20049 utils.go:75] GRPC call: /csi.v1.Node/NodeGetInfo
I0928 22:41:33.288349   20049 utils.go:76] GRPC request: {}
I0928 22:41:33.288420   20049 utils.go:82] GRPC response: {"node_id":"aks-nodepool1-63215908-vmss000000"}
I0928 22:41:56.774616   20049 utils.go:75] GRPC call: /csi.v1.Node/NodePublishVolume
I0928 22:41:56.774662   20049 utils.go:76] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount","target_path":"/var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"containerName":"shared","csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"i2ksource-664c6f644d-gmbc8","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"1941ffb4-5bea-468e-b70c-6bca0c533438","csi.storage.k8s.io/serviceAccount.name":"default","protocol":"fuse","resourceGroup":"AthenaRG","storageAccount":"i2kathena"},"volume_id":"athena-shared-fuse-vh"}
I0928 22:41:56.775080   20049 utils.go:75] GRPC call: /csi.v1.Node/NodePublishVolume
I0928 22:41:56.775100   20049 utils.go:76] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount","target_path":"/var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":3}},"volume_context":{"containerName":"corpus","csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"i2kweb-5657bf76f4-5clxl","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"09057819-7c56-4a67-a6f9-acfc29c42dcf","csi.storage.k8s.io/serviceAccount.name":"default","protocol":"fuse","resourceGroup":"AthenaRG","storageAccount":"i2kathena"},"volume_id":"athena-corpus-fuse-vh"}
I0928 22:41:56.775377   20049 nodeserver.go:122] NodePublishVolume: volume athena-shared-fuse-vh mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount at /var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount with mountOptions: [bind]
I0928 22:41:56.775416   20049 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount /var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount)
I0928 22:41:56.776497   20049 nodeserver.go:122] NodePublishVolume: volume athena-corpus-fuse-vh mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount at /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount with mountOptions: [bind]
I0928 22:41:56.776574   20049 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount)
E0928 22:41:56.777944   20049 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount /var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount
Output: mount: /var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount does not exist.

E0928 22:41:56.778041   20049 utils.go:80] GRPC error: rpc error: code = Internal desc = Could not mount "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount" at "/var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount /var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount
Output: mount: /var/lib/kubelet/pods/1941ffb4-5bea-468e-b70c-6bca0c533438/volumes/kubernetes.io~csi/i2kathena-shared-pv/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-shared-pv/globalmount does not exist.
E0928 22:41:56.778317   20049 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount
Output: mount: /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount does not exist.

E0928 22:41:56.778382   20049 utils.go:80] GRPC error: rpc error: code = Internal desc = Could not mount "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount" at "/var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount
Output: mount: /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount does not exist.
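For context, the failure above is in the second of the driver's two mount steps: NodeStageVolume mounts the blob container (via blobfuse) at the per-PV globalmount path once per node, and NodePublishVolume then bind-mounts that path into each pod. A sketch of the equivalent commands (abbreviated paths, illustrative flags, not the driver's literal invocation):

```shell
# Stage: blobfuse mounts the container once per node at the globalmount path
blobfuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount \
    --container-name=shared --tmp-path=/mnt/blobfusetmp

# Publish: each pod gets a bind mount of the staged path
mount -o bind \
    /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount \
    /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount
```

The "special device ... does not exist" error means the bind-mount source was never (re)staged, so the publish step has nothing to bind.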

kubectl get pv:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS             REASON   AGE
i2kathena-corpus-pv                        1Gi        ROX            Retain           Bound    default/i2kathena-corpus-pvc          azureblob-fuse-premium            20m
i2kathena-shared-pv                        10Gi       RWX            Retain           Bound    default/i2kathena-shared-pvc          azureblob-fuse-premium            20m

kubectl get pvc:

NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
i2kathena-corpus-pvc          Bound    i2kathena-corpus-pv                        1Gi        ROX            azureblob-fuse-premium   20m
i2kathena-shared-pvc          Bound    i2kathena-shared-pv                        10Gi       RWX            azureblob-fuse-premium   20m

Sample pod description:

  Warning  FailedMount  40s (x9 over 18m)  kubelet  MountVolume.SetUp failed for volume "i2kathena-corpus-pv" : rpc error: code = Internal desc = Could not mount "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount" at "/var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount
Output: mount: /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-corpus-pv/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/i2kathena-corpus-pv/globalmount does not exist.

I was only able to partially work around the issue by using kubectl exec -n kube-system csi-blob-node-6clgc -c blob -- /bin/sh and then manually creating the missing pv-name/globalmount directories and setting their owner to UID 1001, which is the UID under which my pods run. (I'm not sure the ownership change is necessary.)

This got the pod started. However, when I look at the directories in my pods into which the PVCs are mounted, they're empty, even though the container named by the volumeAttributes.containerName field in the CSI spec is not empty.

Running the mount command in the CSI driver's blob container doesn't show any of my blob containers mounted. However, I do see lines like this:

/dev/sda1 on /var/lib/kubelet/pods/09057819-7c56-4a67-a6f9-acfc29c42dcf/volumes/kubernetes.io~csi/i2kathena-orpus-pv/mount type ext4 (rw,relatime,discard)

I'm a little surprised to see /dev/sda1 at a mount point that should be backed by a blob container, but I am not familiar with how a FUSE file system appears in mtab.

andyzhangx commented 2 years ago

do you have two PVs with the same volumeHandle value?

ejschoen commented 2 years ago

Not concurrently, but I have deleted and recreated PVs reusing the same volume handles.


andyzhangx commented 2 years ago

Please don't use the same volumeHandle value. If the original PV is still in use, a second PV with the same volumeHandle value will not be mounted on the same node.

ejschoen commented 2 years ago

Changing the volume handles did seem to solve the problem, even though I was not using the same volumeHandle twice at the same time.

Does this mean that if I deploy an application through a Helm chart, and that chart creates CSI-implemented PVs, PVCs, and pods that use those PVCs, then I have to generate new unique volumeHandles for each PV with each Helm chart deployment or upgrade?

andyzhangx commented 2 years ago

If the PV is created by the driver, the volumeHandle is unique. The original problem is that the self-configured PV was still mounted on the node, and it conflicted with a new PV using the same volumeHandle.

ejschoen commented 2 years ago

Thanks. I think I understand. Since PersistentVolumes are immutable, if I want to change something about the volume when redeploying a Helm chart, I have to create a new PV name and make sure it doesn't use the same volumeHandle as a previous PersistentVolume that the Helm chart would eventually cause to be destroyed (because there would no longer be any Pods mounting PVCs referring to the old PVs).

If I were to reuse a volumeHandle, I understand that during the pods' termination grace period there would be duplicate volumeHandles: the old, not-yet-destroyed PV and the new PV sharing the same handle. But after the old PVs get destroyed by Kubernetes, once they're no longer referenced, there is no longer any duplication. By then, is it too late? Does the driver not do anything to detect that the volumeHandle is no longer ambiguous?
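One way to avoid the collision in a Helm chart is to fold the release revision into the handle, so each upgrade produces a handle that never matches the outgoing PV's. A sketch, assuming the chart templates its own static PVs (the file name and naming scheme are illustrative):

```yaml
{{- /* templates/pv.yaml (hypothetical chart file) */}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Release.Name }}-shared-pv-r{{ .Release.Revision }}
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: blob.csi.azure.com
    # Revision-scoped handle: the old and new PVs never share a volumeHandle,
    # even while both briefly exist during the termination grace period.
    volumeHandle: {{ .Release.Name }}-shared-{{ .Release.Revision }}
    volumeAttributes:
      containerName: shared
      protocol: fuse
```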

Just so I understand your explanation above: the help pages that I am reading (for example, https://learn.microsoft.com/en-us/azure/aks/azure-csi-blob-storage-static?tabs=secret) distinguish between static and dynamic blob provisioning. Is this the same distinction, since in the dynamic case the application doesn't create its own PersistentVolume?

andyzhangx commented 2 years ago

it's related to this upstream issue: https://github.com/kubernetes/kubernetes/issues/91556. The volumeHandle is the unique ID kubelet uses to track a mount, so this is by design in k8s for now.

In the dynamic case, the volumeHandle is a UUID, so it's always unique; in the static case, you need to make sure the volumeHandle is unique yourself.
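The uniqueness guarantee in the dynamic case comes simply from UUID-based naming. A minimal sketch of the idea (the prefix and format are illustrative, not the driver's exact scheme):

```python
import uuid

def make_volume_handle(prefix: str = "pvc") -> str:
    """Generate a collision-free volume handle, as dynamic provisioners do."""
    return f"{prefix}-{uuid.uuid4()}"

# Two handles generated this way will not collide in practice,
# which is why dynamically provisioned PVs avoid this issue.
a = make_volume_handle()
b = make_volume_handle()
print(a != b)  # True
```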

ejschoen commented 2 years ago

That's too bad; it's very un-k8s-like. Kubelet normally reconciles pods and resources toward the desired state, but here a temporarily duplicated volumeHandle that quickly becomes unique again leaves a discrepancy between desired and actual state that kubelet and the CSI driver never resolve.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/blob-csi-driver/issues/762#issuecomment-1446611059):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.