wywself opened 1 week ago
I upgraded CDI to version 1.60.3 and encountered the same problem. Can anyone help? Thank you.
# kubectl get dv -n wyw-test-dv | grep -v Succeeded
NAME    PHASE                PROGRESS   RESTARTS   AGE
1c7ld   CSICloneInProgress   N/A                   4m39s
# kubectl get pvc | grep tmp
tmp-pvc-1577eabb-fad2-4728-9f0f-bedec895517c   Lost   pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3   0   ceph-block   4m43s
# kubectl describe dv -n wyw-test-dv 1c7ld
Name:          1c7ld
Namespace:     wyw-test-dv
Labels:        <none>
Annotations:   cdi.kubevirt.io/cloneType: csi-clone
               cdi.kubevirt.io/storage.clone.token:
                 eyJhbGciOiJQUzI1NiJ9.eyJleHAiOjE3MzAyODE2MzksImlhdCI6MTczMDI4MTMzOSwiaXNzIjoiY2RpLWFwaXNlcnZlciIsIm5hbWUiOiJpbWctZjlqc242ZXMtY2VwaC1ibG9ja...
               cdi.kubevirt.io/storage.extended.clone.token:
                 eyJhbGciOiJQUzI1NiJ9.eyJleHAiOjIwNDU2NDEzNDIsImlhdCI6MTczMDI4MTM0MiwiaXNzIjoiY2RpLWRlcGxveW1lbnQiLCJuYW1lIjoiaW1nLWY5anNuNmVzLWNlcGgtYmxvY...
               cdi.kubevirt.io/storage.usePopulator: true
API Version:   cdi.kubevirt.io/v1beta1
Kind:          DataVolume
Metadata:
  Creation Timestamp:  2024-10-30T09:42:19Z
  Finalizers:
    cdi.kubevirt.io/dataVolumeFinalizer
  Generation:        1
  Resource Version:  3764339
  UID:               f05e31f0-8fee-487b-930d-036fdda35778
Spec:
  Pvc:
    Access Modes:
      ReadWriteOnce
    Resources:
      Requests:
        Storage:         50Gi
    Storage Class Name:  ceph-block
    Volume Mode:         Block
  Source:
    Pvc:
      Name:       img-f9jsn6es-ceph-block
      Namespace:  default
Status:
  Claim Name:  1c7ld
  Conditions:
    Last Heartbeat Time:   2024-10-30T09:42:27Z
    Last Transition Time:  2024-10-30T09:42:27Z
    Message:               PVC 1c7ld Pending
    Reason:                Pending
    Status:                False
    Type:                  Bound
    Last Heartbeat Time:   2024-10-30T09:42:53Z
    Last Transition Time:  2024-10-30T09:42:27Z
    Reason:                TransferRunning
    Status:                False
    Type:                  Ready
    Last Heartbeat Time:   2024-10-30T09:42:29Z
    Last Transition Time:  2024-10-30T09:42:29Z
    Reason:                Populator is running
    Status:                True
    Type:                  Running
  Phase:     CSICloneInProgress
  Progress:  N/A
Events:
  Type    Reason               Age                    From                             Message
  ----    ------               ----                   ----                             -------
  Normal  CloneScheduled       5m                     datavolume-pvc-clone-controller  Cloning from default/img-f9jsn6es-ceph-block into wyw-test-dv/1c7ld scheduled
  Normal  Pending              5m                     datavolume-pvc-clone-controller  PVC 1c7ld Pending
  Normal  PrepClaimInProgress  4m54s                  datavolume-pvc-clone-controller  Prepping PersistentVolumeClaim for DataVolume wyw-test-dv/1c7ld
  Normal  RebindInProgress     4m42s                  datavolume-pvc-clone-controller  Rebinding PersistentVolumeClaim for DataVolume wyw-test-dv/1c7ld
  Normal  CSICloneInProgress   4m34s (x2 over 4m58s)  datavolume-pvc-clone-controller  CSI Volume clone in progress (for pvc default/img-f9jsn6es-ceph-block)
# kubectl get pvc -n wyw-test-dv 1c7ld -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/clonePhase: CSIClone
    cdi.kubevirt.io/cloneType: csi-clone
    cdi.kubevirt.io/createdForDataVolume: f05e31f0-8fee-487b-930d-036fdda35778
    cdi.kubevirt.io/dataSourceNamespace: default
    cdi.kubevirt.io/storage.clone.token: eyJhbGciOiJQUzI1NiJ9.eyJleHAiOjE3MzAyODE2MzksImlhdCI6MTczMDI4MTMzOSwiaXNzIjoiY2RpLWFwaXNlcnZlciIsIm5hbWUiOiJpbWctZjlqc242ZXMtY2VwaC1ibG9jayIsIm5hbWVzcGFjZSI6ImRlZmF1bHQiLCJuYmYiOjE3MzAyODEzMzksIm9wZXJhdGlvbiI6IkNsb25lIiwicGFyYW1zIjp7InRhcmdldE5hbWUiOiIxYzdsZCIsInRhcmdldE5hbWVzcGFjZSI6Ind5dy10ZXN0LWR2In0sInJlc291cmNlIjp7Imdyb3VwIjoiIiwicmVzb3VyY2UiOiJwZXJzaXN0ZW50dm9sdW1lY2xhaW1zIiwidmVyc2lvbiI6InYxIn19.G8ed3fmjdTnUJG2jAoKabaZ6DuQ1fumBeTo7zcZRn7xqpxoljYwxkGmTHEoqzrqb0qTih1YmE9rfIipx9TFDs09KtpSISkcCg_KL6is844r7XSmC49QQYsz0RYKjwi0tw2fwGRyfK3eAZeWGSKnYf3omRS2gFpL16BotR6QV3olKExDIg--qKCjsWVWNVoLqzuUPvOx9yFGTo8Qp6ICtlE7FAkN7NKhs5hbTMEVaapKfFLK2niw7AgGxTbAKhlkhwJwyl4dIwwateKP_lTtaaa5F44DYxvNHhu99WCuShwObRhGk6sMPOrU7WKMhcessdHyGe_G_L11sJnwIaLJY7g
    cdi.kubevirt.io/storage.condition.running: "true"
    cdi.kubevirt.io/storage.condition.running.message: ""
    cdi.kubevirt.io/storage.condition.running.reason: Populator is running
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.extended.clone.token: eyJhbGciOiJQUzI1NiJ9.eyJleHAiOjIwNDU2NDEzNDgsImlhdCI6MTczMDI4MTM0OCwiaXNzIjoiY2RpLWRlcGxveW1lbnQiLCJuYW1lIjoiaW1nLWY5anNuNmVzLWNlcGgtYmxvY2siLCJuYW1lc3BhY2UiOiJkZWZhdWx0IiwibmJmIjoxNzMwMjgxMzQ4LCJvcGVyYXRpb24iOiJDbG9uZSIsInBhcmFtcyI6eyJ0YXJnZXROYW1lIjoiMWM3bGQiLCJ0YXJnZXROYW1lc3BhY2UiOiJ3eXctdGVzdC1kdiIsInVpZCI6IjE1NzdlYWJiLWZhZDItNDcyOC05ZjBmLWJlZGVjODk1NTE3YyJ9LCJyZXNvdXJjZSI6eyJncm91cCI6IiIsInJlc291cmNlIjoicGVyc2lzdGVudHZvbHVtZWNsYWltcyIsInZlcnNpb24iOiJ2MSJ9fQ.D6eq2K1PkvFIZV4VQecnEwojiGu9E8PUa74_K4mq7smrTet4zuwDXHMXhsH4Y2eycfv_xzIpXo5wNB8vJw0uMhL9N6kNqb1NmGNB0vVmIjMCMDLHCSYbmXlHg8dWiBZp4V9suhui_3Wp8D7ZSdv0bim1U_usPAP853wT9DR5wlJXa9XpKtduKSTvaAR9lVFybaIG9Aa0OTiLHNKzIaaReOlBotSMbfSUdbppIbotooczzlUmbM-YZIJGeJ_eg4ZsAic6qD2r5EIhOSaw9lRl0iwPRU2vUjSGTXNC3NGqSPfbAf89UxPhHK-v90wM8rTmlFaNUM8h1fjZXP_EqzfX_g
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.usePopulator: "true"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cdi.kubevirt.io/v1beta1","kind":"DataVolume","metadata":{"annotations":{},"name":"1c7ld","namespace":"wyw-test-dv"},"spec":{"pvc":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"50Gi"}},"storageClassName":"ceph-block","volumeMode":"Block"},"source":{"pvc":{"name":"img-f9jsn6es-ceph-block","namespace":"default"}}}}
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2024-10-30T09:42:27Z"
  finalizers:
  - kubernetes.io/pvc-protection
  - cdi.kubevirt.io/clonePopulator
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
  name: 1c7ld
  namespace: wyw-test-dv
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: 1c7ld
    uid: f05e31f0-8fee-487b-930d-036fdda35778
  resourceVersion: "3764335"
  uid: 1577eabb-fad2-4728-9f0f-bedec895517c
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: cdi.kubevirt.io
    kind: VolumeCloneSource
    name: volume-clone-source-f05e31f0-8fee-487b-930d-036fdda35778
  dataSourceRef:
    apiGroup: cdi.kubevirt.io
    kind: VolumeCloneSource
    name: volume-clone-source-f05e31f0-8fee-487b-930d-036fdda35778
  resources:
    requests:
      storage: 50Gi
  storageClassName: ceph-block
  volumeMode: Block
status:
  phase: Pending
# kubectl get pvc tmp-pvc-1577eabb-fad2-4728-9f0f-bedec895517c
NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
tmp-pvc-1577eabb-fad2-4728-9f0f-bedec895517c   Lost     pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3   0                         ceph-block     7m46s
# kubectl get pv pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3   40Gi       RWO            Retain           Bound    wyw-test-dv/1c7ld   ceph-block              44m
@mhenriks Can you take a look?
Getting similar vibes as this: https://github.com/kubevirt/containerized-data-importer/issues/3259
Please confirm that the Ceph configuration is correct: StorageClass, CephBlockPools, etc.
@mhenriks thank you. Here is my configuration. Before deleting a DV, I change the PV's persistentVolumeReclaimPolicy to Delete. Is there something wrong with these configurations?
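The reclaimPolicy change is applied with a patch along these lines (a sketch; the PV name here is the one from the outputs above):

# kubectl patch pv pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'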
# kubectl get sc ceph-block -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: os-rook-ceph-cluster
    meta.helm.sh/release-namespace: default
    storageclass.kubernetes.io/is-default-class: "false"
  creationTimestamp: "2024-10-25T16:28:07Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: ceph-block
  resourceVersion: "2306"
  uid: e59a9040-8ec1-402b-bd84-10e4694cb48b
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  imageFeatures: layering
  imageFormat: "2"
  pool: ceph-blockpool
provisioner: rook-ceph.rbd.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
# kubectl get cephblockpools -n rook-ceph
NAME             PHASE
ceph-blockpool   Ready
I would make sure that you don't have a bunch of retained PVs that are unneeded and taking up space; that could be causing issues on the backend. A Delete reclaimPolicy may keep that from happening.
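As a quick check (with a Retain policy, PVs from deleted claims linger in the Released state until removed by hand):

# kubectl get pv | grep Released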
@mhenriks Thank you. Before creating DVs, I delete all the DVs under the wyw-test-dv namespace and their associated PVs.
According to the records above, analyzing the DV 1c7ld that is stuck in CSICloneInProgress shows that the PV requested by tmp-pvc-1577eabb-fad2-4728-9f0f-bedec895517c (pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3) already has its claimRef pointing at the target PVC wyw-test-dv/1c7ld, while the temporary PVC is left in the Lost state.
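This can be confirmed by reading the PV's claimRef directly:

# kubectl get pv pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3 -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}'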
Another point: the DV clone phases in the source code are CSIClonePhase, PrepClaimPhase, and RebindPhase, but when describing the stuck DV 1c7ld, the order in Events is CloneScheduled -> Pending -> PrepClaimInProgress -> RebindInProgress -> CSICloneInProgress.
For the DVs that are created successfully, the order in Events is CloneScheduled -> Pending -> CSICloneInProgress -> PrepClaimInProgress -> RebindInProgress -> Bound -> CloneSucceeded.
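Since describe aggregates repeated events, one way to double-check the actual ordering is to sort the namespace events by creation time:

# kubectl get events -n wyw-test-dv --sort-by=.metadata.creationTimestamp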
@wywself it appears that the PV was updated to refer to the target PVC (1c7ld). The kube-controller-manager should set spec.volumeName of the target PVC to pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3; not sure why that's not happening. Maybe check the kube-controller-manager log and the events on the PV. CDI waits for the target to be bound before deleting the "lost" PVC.
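A sketch of those checks; the kube-controller-manager pod name is hypothetical here (for a static pod it is typically suffixed with the node name):

# kubectl get pvc -n wyw-test-dv 1c7ld -o jsonpath='{.spec.volumeName}'
# kubectl describe pv pvc-55c432cf-30b8-4c68-a3f2-22ef49ef32b3
# kubectl -n kube-system logs kube-controller-manager-<node-name> | grep -iE '1c7ld|pvc-55c432cf'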
What happened: when creating DataVolumes in a batch with the script below, some CSI clones get stuck (lk4w5 below never leaves RebindInProgress) and the temporary clone PVC is left in the Lost state.
count=0
while [ $count -lt 50 ]
do
  count=$(expr $count + 1)
  length=5
  random_string=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w $length | head -n 1)
  echo $random_string
  sed "s/DV-NAME/$random_string/g" /root/batch-create/batch-dv.yaml > /root/batch-create/batch-dv-1.yaml
  kubectl apply -f /root/batch-create/batch-dv-1.yaml
done
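For context, batch-dv.yaml is presumably a DataVolume manifest using DV-NAME as the name placeholder; this sketch is reconstructed from the last-applied-configuration annotation in the PVC below, not the reporter's actual file:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: DV-NAME
  namespace: wyw-test-dv
spec:
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi
    storageClassName: ceph-block
    volumeMode: Block
  source:
    pvc:
      name: img-uvt3cv6k-ceph-block
      namespace: default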
# kubectl get dv -n wyw-test-dv
lk4w5   RebindInProgress   N/A   31m
# kubectl get pvc -A | grep -i lost
default   tmp-pvc-5ce2f242-2cee-417e-8006-1353c7d1e478   Lost   pvc-cf2fe6e7-7911-4889-a9e0-887d824e88fc   0   ceph-block   32m
# kubectl get pvc tmp-pvc-5ce2f242-2cee-417e-8006-1353c7d1e478 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/clonePhase: Pending
    cdi.kubevirt.io/cloneType: csi-clone
    cdi.kubevirt.io/dataSourceNamespace: default
    cdi.kubevirt.io/storage.clone.token: eyJhbGciOiJQUzI1NiJ9.eyJleHAiOjE3Mjk2NjU0NzYsImlhdCI6MTcyOTY2NTE3NiwiaXNzIjoiY2RpLWFwaXNlcnZlciIsIm5hbWUiOiJpbWctdXZ0M2N2NmstY2VwaC1ibG9jayIsIm5hbWVzcGFjZSI6ImRlZmF1bHQiLCJuYmYiOjE3Mjk2NjUxNzYsIm9wZXJhdGlvbiI6IkNsb25lIiwicGFyYW1zIjp7InRhcmdldE5hbWUiOiJsazR3NSIsInRhcmdldE5hbWVzcGFjZSI6Ind5dy10ZXN0LWR2In0sInJlc291cmNlIjp7Imdyb3VwIjoiIiwicmVzb3VyY2UiOiJwZXJzaXN0ZW50dm9sdW1lY2xhaW1zIiwidmVyc2lvbiI6InYxIn19.gBjwqBJFGX5L3Oua2MSkqMGZCxD9mVHgl91snaegN_b-xOYw2AdcLtRLkBxZRjQltqecC1LG-1nxafetyayzzBOGpoltQblmrUFc9ziQUJZDwbzfVtOxQVq38ouihOnbrpsxqqVXdnwFtEslfxItu2OJx6H5d55AsNoYSHyL-RoA79Z_MYxlCrXSG3fMpYh2gQiS5EV6rocO-2K2Q_ggCjvJVixkwx3DpxLUSEqkHJ556Pk6vVtPihfY3-hwNxznqx9mlhsNr3avzgk7CzY20h4PVp07e7XXHwMQpl90j1077crFZtA_voZ42IvOblbDABA2SJR6AtVPI3pVjUO2gA
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.populator.kind: VolumeCloneSource
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.usePopulator: "true"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cdi.kubevirt.io/v1beta1","kind":"DataVolume","metadata":{"annotations":{},"name":"lk4w5","namespace":"wyw-test-dv"},"spec":{"pvc":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"50Gi"}},"storageClassName":"ceph-block","volumeMode":"Block"},"source":{"pvc":{"name":"img-uvt3cv6k-ceph-block","namespace":"default"}}}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2024-10-23T06:33:02Z"
  finalizers:
# kubectl get pv pvc-cf2fe6e7-7911-4889-a9e0-887d824e88fc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-cf2fe6e7-7911-4889-a9e0-887d824e88fc   40Gi       RWO            Retain           Bound    wyw-test-dv/lk4w5   ceph-block              34m
# kubectl get pv pvc-cf2fe6e7-7911-4889-a9e0-887d824e88fc -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: rook-ceph.rbd.csi.ceph.com
    volume.kubernetes.io/provisioner-deletion-secret-name: rook-csi-rbd-provisioner
    volume.kubernetes.io/provisioner-deletion-secret-namespace: rook-ceph
  creationTimestamp: "2024-10-23T06:33:19Z"
  finalizers:
What you expected to happen: all DVs reach the Succeeded phase.
Environment:
CDI version (use kubectl get deployments cdi-deployment -o yaml): 1.58.1
Kubernetes version (use kubectl version): 1.27.6