kubevirt / containerized-data-importer

Data Import Service for kubernetes, designed with kubevirt in mind.
Apache License 2.0

cdi Label matching bound pv does not take effect #3369

Closed — zhangmingxian closed this 1 week ago

zhangmingxian commented 1 month ago

What happened: The label selector specified in the CDI DataVolume does not take effect when binding the PV, even though the matchLabels in the DataVolume and the labels on the PV are identical.

What you expected to happen: The CDI label selector should match the PV normally, so that the volume lands on the intended node.

The PV and DataVolume manifests are as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: local.csi.openebs.io
  name: pvc-34fd945a-2634-4ffb-a7d4-94a9ac631234
  labels:
    openebs.io/nodename: cmcc-test-worker-01
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 60Gi
  csi:
    driver: local.csi.openebs.io
    fsType: xfs
    volumeAttributes:
      openebs.io/cas-type: localpv-lvm
      openebs.io/volgroup: localdatak8s
    volumeHandle: pvc-34fd945a-2634-4ffb-a7d4-94a9ac631234
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename
          operator: In
          values:
          - cmcc-test-worker-01
  persistentVolumeReclaimPolicy: Delete
  storageClassName: openebs-lvmpv
  volumeMode: Filesystem
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "jammy-test3"
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  source:
    http:
      url: "http://cloudimage.test.com:8000/jammy-server-cloudimg-amd64.img"
  pvc:
    resources:
      requests:
        storage: "60Gi"
    storageClassName: "openebs-lvmpv"
    accessModes:
    - ReadWriteOnce
    selector:
      matchLabels:
        openebs.io/nodename: cmcc-test-worker-01
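For context, Kubernetes binds a PVC to a pre-provisioned PV only when the claim's spec.selector matchLabels are a subset of the PV's metadata.labels (in addition to class, size, and access-mode matching). A minimal Python sketch of that subset check (not CDI's actual code, just an illustration of the semantics):

```python
def selector_matches(match_labels: dict, pv_labels: dict) -> bool:
    # A matchLabels selector matches when every key/value pair
    # appears verbatim in the PV's labels (subset semantics).
    return all(pv_labels.get(k) == v for k, v in match_labels.items())

pv_labels = {"openebs.io/nodename": "cmcc-test-worker-01"}
selector = {"openebs.io/nodename": "cmcc-test-worker-01"}
print(selector_matches(selector, pv_labels))  # True: labels are identical
```

So with the manifests above, a PVC that actually carried this selector should bind the labelled PV.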

Environment:

akalenyu commented 1 month ago

Sounds like a bug, since I'd expect we create a PVC with that selector and it binds. Could you attach the PVCs that end up being created?

zhangmingxian commented 1 month ago

The newly created DataVolume has no labels, and I don't know why:


$ k get pv --show-labels | grep prim
pvc-2217649f-d390-49a9-8c93-a3938a248198   60Gi       RWO            Delete           Bound       default/prime-470ea4d5-7b17-4b89-8901-7582ea2b7c9c-scratch        openebs-lvmpv            7s      <none>
pvc-542e83b9-d024-4bf1-9756-da023d615e8e   60Gi       RWO            Delete           Bound       default/prime-470ea4d5-7b17-4b89-8901-7582ea2b7c9c                openebs-lvmpv            16s     <none>

$ k get pvc --show-labels 
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE     LABELS
jammy-test1   Bound    pvc-34fd945a-2634-4ffb-a7d4-94a9ac639d0d   50Gi       RWO            openebs-lvmpv   5d1h    alerts.k8s.io/KubePersistentVolumeFillingUp=disabled,app.kubernetes.io/component=storage,app.kubernetes.io/managed-by=cdi-controller,app=containerized-data-importer
jammy-test2   Bound    pvc-bf182bb5-dad2-4776-8eca-c6c9db778d12   30Gi       RWO            openebs-lvmpv   9d      alerts.k8s.io/KubePersistentVolumeFillingUp=disabled,app.kubernetes.io/component=storage,app.kubernetes.io/managed-by=cdi-controller,app=containerized-data-importer
jammy-test3   Bound    pvc-542e83b9-d024-4bf1-9756-da023d615e8e   60Gi       RWO            openebs-lvmpv   7m37s   alerts.k8s.io/KubePersistentVolumeFillingUp=disabled,app.kubernetes.io/component=storage,app.kubernetes.io/managed-by=cdi-controller,app=containerized-data-importer

$ k get pv --show-labels
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                             STORAGECLASS    REASON   AGE     LABELS
pvc-34fd945a-2634-4ffb-a7d4-94a9ac631234   60Gi       RWO            Delete           Available                                                                     openebs-lvmpv            6m18s   openebs.io/nodename=cmcc-test-worker-01
pvc-34fd945a-2634-4ffb-a7d4-94a9ac639d0d   50Gi       RWO            Delete           Bound       default/jammy-test1                                               openebs-lvmpv            5d1h    <none>
pvc-542e83b9-d024-4bf1-9756-da023d615e8e   60Gi       RWO            Delete           Bound       default/jammy-test3                                               openebs-lvmpv            3m21s   <none>
pvc-6403149c-d3c1-4ab5-9ca9-c7a5a3e82d29   20Gi       RWO            Delete           Bound       kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   local                    9d      openebs.io/cas-type=local-hostpath
pvc-b7bf508d-2e8e-4b07-8f2d-ccf11d374d91   20Gi       RWO            Delete           Bound       kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   local                    9d      openebs.io/cas-type=local-hostpath
pvc-bf182bb5-dad2-4776-8eca-c6c9db778d12   30Gi       RWO            Delete           Bound       default/jammy-test2                                               openebs-lvmpv            9d      <none>
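The listings above suggest the intermediate (prime) PVCs that CDI creates do not carry the DataVolume's selector, so the labelled PV stays Available while a freshly provisioned PV binds instead. A hedged sketch of the binding decision under that assumption (simplified; real binding also considers access modes and binding mode):

```python
def bind(claim_selector, claim_class, claim_size_gi, pvs):
    # Without a selector, any Available PV of the right class and size
    # can bind, which is why a dynamically provisioned PV can win over
    # the pre-provisioned, labelled one.
    for pv in pvs:
        if pv["status"] != "Available" or pv["class"] != claim_class:
            continue
        if pv["size_gi"] < claim_size_gi:
            continue
        if claim_selector and any(pv["labels"].get(k) != v
                                  for k, v in claim_selector.items()):
            continue
        return pv["name"]
    return None

pvs = [{"name": "pvc-34fd945a-2634-4ffb-a7d4-94a9ac631234",
        "status": "Available", "class": "openebs-lvmpv", "size_gi": 60,
        "labels": {"openebs.io/nodename": "cmcc-test-worker-01"}}]
# With the selector preserved, the labelled PV would be chosen:
print(bind({"openebs.io/nodename": "cmcc-test-worker-01"},
           "openebs-lvmpv", 60, pvs))
```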
zhangmingxian commented 1 month ago

> Sounds like a bug, since I'd expect we create a PVC with that selector and it binds. Could you attach the PVCs that end up being created?

Could you please take a look? Thank you very much

akalenyu commented 1 month ago

> > Sounds like a bug, since I'd expect we create a PVC with that selector and it binds. Could you attach the PVCs that end up being created?
>
> Could you please take a look? Thank you very much

Sure, take a look here: https://github.com/kubevirt/containerized-data-importer/pull/3370

akalenyu commented 1 month ago

So, to summarize the discussion on the PR: this is probably not the proper way to achieve the use case; some alternative suggestions are outlined in the comments of https://github.com/kubevirt/containerized-data-importer/pull/3370.

If that's the case, please let us know if we can close this.

zhangmingxian commented 1 week ago

The root cause was the cdi.kubevirt.io/storage.bind.immediate.requested annotation, so just comment that annotation out.
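As a small illustration of the fix (not CDI code; just manipulating the DataVolume manifest from this issue as a Python dict before applying it), the workaround amounts to dropping that one annotation:

```python
dv = {
    "apiVersion": "cdi.kubevirt.io/v1beta1",
    "kind": "DataVolume",
    "metadata": {
        "name": "jammy-test3",
        "annotations": {
            "cdi.kubevirt.io/storage.bind.immediate.requested": "true",
        },
    },
}

# Remove the immediate-bind annotation so CDI uses its default binding flow.
dv["metadata"]["annotations"].pop(
    "cdi.kubevirt.io/storage.bind.immediate.requested", None)
print(dv["metadata"]["annotations"])  # {}
```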