kubernetes-sigs / gcp-compute-persistent-disk-csi-driver

The Google Compute Engine Persistent Disk (GCE PD) Container Storage Interface (CSI) Storage Plugin.
Apache License 2.0

PersistentVolumeClaim by Name {name}: claim in dataSource not bound or invalid #1651

Closed: felipemendes1994 closed this issue 1 month ago

felipemendes1994 commented 6 months ago

Hello,

I've been trying to clone an existing ReadWriteOnce PVC as a ReadOnlyMany one, but without success.

Existing Volume description:

Name:          mlp-cache-pvc-20240318
Namespace:     default
StorageClass:  fast
Status:        Bound
Volume:        pvc-6aef82ed-33b1-4527-8367-88d9bcf6c2a6
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
               volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>

My volume clone YAML:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mlp-readonly-cache-pvc-20240327
  # namespace: default  # commented out because it made no difference
spec:
  dataSource:
    name: mlp-cache-pvc-20240318
    kind: PersistentVolumeClaim
  accessModes:
    - ReadOnlyMany
  storageClassName: fast
  resources:
    requests:
      storage: 100Gi
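For reference, here is a variant of the manifest above that follows the Kubernetes volume-cloning rules (the clone must live in the same namespace as the source and must request at least the source's capacity, with the same StorageClass). Pinning the namespace explicitly and keeping the source's original `ReadWriteOnce` access mode are diagnostic variations to try, not a confirmed fix:

```yaml
# Hypothetical variant of the clone manifest: namespace pinned to the
# source claim's namespace, and the access mode kept identical to the
# source PVC, to rule those two factors out.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mlp-readonly-cache-pvc-20240327
  namespace: default          # must match the source claim's namespace
spec:
  dataSource:
    kind: PersistentVolumeClaim
    name: mlp-cache-pvc-20240318
  accessModes:
    - ReadWriteOnce           # same as the source; ReadOnlyMany is the change under test
  storageClassName: fast      # cloning requires the same StorageClass as the source
  resources:
    requests:
      storage: 100Gi          # must be >= the source claim's capacity
```

If this variant binds while the ReadOnlyMany one does not, the provisioner's rejection is tied to the access-mode change rather than to the state of the source claim.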

When I apply the clone YAML, it gets stuck in Pending with the following description:

Name:          mlp-readonly-cache-pvc-20240327
Namespace:     default
StorageClass:  fast
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
               volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
DataSource:
  Kind:   PersistentVolumeClaim
  Name:   mlp-cache-pvc-20240318
Used By:  <none>
Events:
  Type     Reason                Age                From                                                                                              Message
  ----     ------                ----               ----                                                                                              -------
  Normal   Provisioning          14s (x7 over 77s)  pd.csi.storage.gke.io_gke-5b7d64186ad785d34b83-17af-79f5-vm_556d6264-d9d7-4ee4-9e50-93f6ff524b04  External provisioner is provisioning volume for claim "default/mlp-readonly-cache-pvc-20240327"
  Warning  ProvisioningFailed    14s (x7 over 77s)  pd.csi.storage.gke.io_gke-5b7d64186ad785d34b83-17af-79f5-vm_556d6264-d9d7-4ee4-9e50-93f6ff524b04  failed to provision volume with StorageClass "fast": error getting handle for DataSource Type PersistentVolumeClaim by Name mlp-cache-pvc-20240318: claim in dataSource not bound or invalid
  Normal   ExternalProvisioning  13s (x7 over 77s)  persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator

As you can see, I get the error `claim in dataSource not bound or invalid`, but my source claim is valid, Bound, and not attached to any pod.

Currently my cluster runs version 1.27.8-gke.1067004.

Any help?

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/1651#issuecomment-2308914977):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.