Sounds like a bug, since I'd expect we create a PVC with that selector and it binds. Could you attach the PVCs that end up being created?
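(For context, the setup under discussion is roughly the following: a DataVolume whose pvc.selector is expected to be copied onto the PVC that CDI creates, so that the claim binds to a pre-created PV carrying a matching label. This is a minimal illustrative sketch, not the reporter's actual manifest; the name and source URL are placeholders, while the storage class and node label are taken from the listings below.)

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: jammy-test                 # placeholder name
spec:
  source:
    http:
      url: "http://example.com/jammy.img"      # placeholder source
  pvc:
    storageClassName: openebs-lvmpv
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 60Gi
    selector:                      # expected to end up on the PVC that CDI creates
      matchLabels:
        openebs.io/nodename: cmcc-test-worker-01   # label carried by the pre-created PV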
The newly created DataVolume has no label; I don't know why.
$ k get pv --show-labels | grep prim
pvc-2217649f-d390-49a9-8c93-a3938a248198 60Gi RWO Delete Bound default/prime-470ea4d5-7b17-4b89-8901-7582ea2b7c9c-scratch openebs-lvmpv 7s <none>
pvc-542e83b9-d024-4bf1-9756-da023d615e8e 60Gi RWO Delete Bound default/prime-470ea4d5-7b17-4b89-8901-7582ea2b7c9c openebs-lvmpv 16s <none>
$ k get pvc --show-labels
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE LABELS
jammy-test1 Bound pvc-34fd945a-2634-4ffb-a7d4-94a9ac639d0d 50Gi RWO openebs-lvmpv 5d1h alerts.k8s.io/KubePersistentVolumeFillingUp=disabled,app.kubernetes.io/component=storage,app.kubernetes.io/managed-by=cdi-controller,app=containerized-data-importer
jammy-test2 Bound pvc-bf182bb5-dad2-4776-8eca-c6c9db778d12 30Gi RWO openebs-lvmpv 9d alerts.k8s.io/KubePersistentVolumeFillingUp=disabled,app.kubernetes.io/component=storage,app.kubernetes.io/managed-by=cdi-controller,app=containerized-data-importer
jammy-test3 Bound pvc-542e83b9-d024-4bf1-9756-da023d615e8e 60Gi RWO openebs-lvmpv 7m37s alerts.k8s.io/KubePersistentVolumeFillingUp=disabled,app.kubernetes.io/component=storage,app.kubernetes.io/managed-by=cdi-controller,app=containerized-data-importer
$ k get pv --show-labels
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
pvc-34fd945a-2634-4ffb-a7d4-94a9ac631234 60Gi RWO Delete Available openebs-lvmpv 6m18s openebs.io/nodename=cmcc-test-worker-01
pvc-34fd945a-2634-4ffb-a7d4-94a9ac639d0d 50Gi RWO Delete Bound default/jammy-test1 openebs-lvmpv 5d1h <none>
pvc-542e83b9-d024-4bf1-9756-da023d615e8e 60Gi RWO Delete Bound default/jammy-test3 openebs-lvmpv 3m21s <none>
pvc-6403149c-d3c1-4ab5-9ca9-c7a5a3e82d29 20Gi RWO Delete Bound kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1 local 9d openebs.io/cas-type=local-hostpath
pvc-b7bf508d-2e8e-4b07-8f2d-ccf11d374d91 20Gi RWO Delete Bound kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0 local 9d openebs.io/cas-type=local-hostpath
pvc-bf182bb5-dad2-4776-8eca-c6c9db778d12 30Gi RWO Delete Bound default/jammy-test2 openebs-lvmpv 9d <none>
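(A note on the output above: --show-labels prints the labels on the PVC/PV objects themselves, while the selector used for volume binding lives in the claim's spec. To check whether CDI copied the selector onto the prime PVC, something like kubectl get pvc prime-470ea4d5-7b17-4b89-8901-7582ea2b7c9c -o jsonpath='{.spec.selector}' should show it, assuming the selector is propagated at all, which is what this issue is about.)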
Could you please take a look? Thank you very much
Sure, take a look here: https://github.com/kubevirt/containerized-data-importer/pull/3370
So just to summarize the discussion on the PR, this is probably not the proper way to achieve the use case, with some suggestions outlined in the comments of https://github.com/kubevirt/containerized-data-importer/pull/3370.
If that's the case, please let us know if we can close this.
The root cause was the content of the annotation, so commenting out the annotation resolves it.
What happened: The label selector specified for CDI to match and bind the PV does not take effect. The matchLabels in the CDI DataVolume are the same as the labels on the PV.
What you expected to happen: The CDI selector should match the PV normally, so that the volume is scheduled to the specific node.
The following are the CDI and PV contents:
Environment:
- CDI version (kubectl get deployments cdi-deployment -o yaml): v1.59.0
- Kubernetes version (kubectl version): 1.26.15
- Kernel (uname -a): 5.19.0-50-generic