Madhu-1 opened 7 months ago
This is also fixed by https://github.com/kubernetes-csi/external-snapshotter/pull/1011, as that PR updates the volumeGroupSnapshotName in the VolumeSnapshotContent status.
It looks like this is not fixed yet; reopening.
Hello, I'm having some difficulty understanding the detail of this issue. Would this issue present itself by failing to delete a snapshot or would it somehow accidentally delete a snapshot. Would somebody be so kind enough to explain?
Thanks!
@jedops The snapshots are deleted internally when the volumegroupsnapshot is deleted. I have provided steps to reproduce and some logs as well. Some checks are missing to skip already-deleted snapshots, or we need to reorder the steps for how we delete the snapshots that were created as part of a volumegroupsnapshot. A sketch of the missing check follows below.
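For illustration, here is a minimal sketch of the kind of check being described. This is not the actual external-snapshotter code; the function name, `memberContentNames`, and the clientset import path (including its major version) are assumptions:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	// Illustrative import; the real generated clientset lives in the
	// external-snapshotter client module (major version may differ).
	snapclient "github.com/kubernetes-csi/external-snapshotter/client/v8/clientset/versioned"
)

// deleteGroupMembers is a hypothetical helper: it deletes each member
// VolumeSnapshotContent of a group snapshot, treating NotFound as success
// because a member may already have been deleted out of band.
func deleteGroupMembers(ctx context.Context, client snapclient.Interface, memberContentNames []string) error {
	for _, name := range memberContentNames {
		err := client.SnapshotV1().VolumeSnapshotContents().Delete(ctx, name, metav1.DeleteOptions{})
		if err != nil && !apierrors.IsNotFound(err) {
			return fmt.Errorf("failed to delete VolumeSnapshotContent %q: %w", name, err)
		}
		// A NotFound error means the content is already gone; skipping it
		// avoids failing (and endlessly re-queuing) the group deletion.
	}
	return nil
}
```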
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Just an update: I tested the creation and deletion of volumegroupsnapshot with the CephFS driver and it seems to work fine. To re-confirm, I tried it again:
yatipadia:ceph-csi$ kubectl get volumegroupsnapshot
NAME READYTOUSE VOLUMEGROUPSNAPSHOTCLASS VOLUMEGROUPSNAPSHOTCONTENT CREATIONTIME AGE
new-groupsnapshot-demo-1 true csi-cephfsplugin-groupsnapclass groupsnapcontent-b8b1c10d-5c07-47c3-bc36-42d4294628e4 5h47m 5h47m
yatipadia:ceph-csi$ kubectl delete volumegroupsnapshot new-groupsnapshot-demo-1
volumegroupsnapshot.groupsnapshot.storage.k8s.io "new-groupsnapshot-demo-1" deleted
Just an update: I tried the same with 10-11 PVCs, and the volumegroupsnapshot was successfully deleted.
yatipadia:Documents$ kubectl get volumesnapshotcontent
NAME READYTOUSE RESTORESIZE DELETIONPOLICY DRIVER VOLUMESNAPSHOTCLASS VOLUMESNAPSHOT VOLUMESNAPSHOTNAMESPACE AGE
snapcontent-114d4ee02d9142894694e5f0d923333c1c840ec22baccf06bdef58d2d66bc1e1-2024-08-30-5.33.1 true 3221225472 Delete rook-ceph.cephfs.csi.ceph.com snapshot-114d4ee02d9142894694e5f0d923333c1c840ec22baccf06bdef58d2d66bc1e1-2024-08-30-5.33.1 default 4m58s
snapcontent-1aacbcb8f80a5c913552ca17c2119725081584242e63e3e6a981a5e96ba95e94-2024-08-30-5.33.5 true 2147483648 Delete rook-ceph.cephfs.csi.ceph.com snapshot-1aacbcb8f80a5c913552ca17c2119725081584242e63e3e6a981a5e96ba95e94-2024-08-30-5.33.5 default 4m53s
snapcontent-4a7d3f27fc5655a210c9dd2228c6d9f6db722446334ce9e0108f6f11640ffd75-2024-08-30-5.33.9 true 3221225472 Delete rook-ceph.cephfs.csi.ceph.com snapshot-4a7d3f27fc5655a210c9dd2228c6d9f6db722446334ce9e0108f6f11640ffd75-2024-08-30-5.33.9 default 4m50s
snapcontent-55c4157c9dfeb50d176631205bd87afaa7d60059d64ac26096361f009197063b-2024-08-30-5.33.1 true 1073741824 Delete rook-ceph.cephfs.csi.ceph.com snapshot-55c4157c9dfeb50d176631205bd87afaa7d60059d64ac26096361f009197063b-2024-08-30-5.33.1 default 4m58s
snapcontent-7d10cafc30b34e5adf253ed8b57da6d0b4718fda80a4e4a048d7e584a31e1e2b-2024-08-30-5.33.2 true 2147483648 Delete rook-ceph.cephfs.csi.ceph.com snapshot-7d10cafc30b34e5adf253ed8b57da6d0b4718fda80a4e4a048d7e584a31e1e2b-2024-08-30-5.33.2 default 4m57s
snapcontent-817932568eb28072640eb89ecfca12ab4c4c7503589ba729e91dfc9efa1d50b6-2024-08-30-5.33.10 true 3221225472 Delete rook-ceph.cephfs.csi.ceph.com snapshot-817932568eb28072640eb89ecfca12ab4c4c7503589ba729e91dfc9efa1d50b6-2024-08-30-5.33.10 default 4m49s
snapcontent-8dd24fbc3ffd96b1f879c2e8a92ed15edc364792186dd2740739cec1d7887365-2024-08-30-5.33.8 true 3221225472 Delete rook-ceph.cephfs.csi.ceph.com snapshot-8dd24fbc3ffd96b1f879c2e8a92ed15edc364792186dd2740739cec1d7887365-2024-08-30-5.33.8 default 4m51s
snapcontent-96fa767049e8bc9e62e60af70e373323f5c23fb5aa971224894c67ad98c60e09-2024-08-30-5.33.11 true 3221225472 Delete rook-ceph.cephfs.csi.ceph.com snapshot-96fa767049e8bc9e62e60af70e373323f5c23fb5aa971224894c67ad98c60e09-2024-08-30-5.33.11 default 4m47s
snapcontent-ac67c44cb87ea6968bd6075cc6a48981692fb38cf462c80794073db84a2a590b-2024-08-30-5.33.7 true 3221225472 Delete rook-ceph.cephfs.csi.ceph.com snapshot-ac67c44cb87ea6968bd6075cc6a48981692fb38cf462c80794073db84a2a590b-2024-08-30-5.33.7 default 4m52s
snapcontent-d3fc2e9cdf6662234b820ddeabb8b32792ec17223020ca73da808d7487851779-2024-08-30-5.33.4 true 2147483648 Delete rook-ceph.cephfs.csi.ceph.com snapshot-d3fc2e9cdf6662234b820ddeabb8b32792ec17223020ca73da808d7487851779-2024-08-30-5.33.4 default 4m55s
snapcontent-ef5edb9c4d1603d3a8688d52492cad28c5a6fec4bdb57dffc57f8f6c0e6dfae4-2024-08-30-5.33.3 true 1073741824 Delete rook-ceph.cephfs.csi.ceph.com snapshot-ef5edb9c4d1603d3a8688d52492cad28c5a6fec4bdb57dffc57f8f6c0e6dfae4-2024-08-30-5.33.3 default 4m56s
yatipadia:Documents$
yatipadia:Documents$ kubectl get volumegroupsnapshot
NAME READYTOUSE VOLUMEGROUPSNAPSHOTCLASS VOLUMEGROUPSNAPSHOTCONTENT CREATIONTIME AGE
new-groupsnapshot-demo-1 true csi-cephfsplugin-groupsnapclass groupsnapcontent-a767e7e9-46df-407e-b282-d0263d13e45e 5m8s 5m10s
yatipadia:Documents$ kubectl delete volumegroupsnapshot new-groupsnapshot-demo-1
volumegroupsnapshot.groupsnapshot.storage.k8s.io "new-groupsnapshot-demo-1" deleted
yatipadia:Documents$ kubectl get volumesnapshotcontent
No resources found
yatipadia:Documents$ kubectl get volumegroupsnapshot
No resources found in default namespace.
yatipadia:Documents$ kubectl get volumesnapshot
No resources found in default namespace.
yatipadia:Documents$
cc @Madhu-1
Good to hear we don't have this bug anymore; in that case we can close it.
@Madhu-1 can you please close this issue as well?
@Madhu-1 can you re-open this issue? We can use the same issue to track the bug.
What happened:
The volumegroupsnapshot deletion gets stuck because the member VolumeSnapshotContents are already deleted: https://github.com/kubernetes-csi/external-snapshotter/blob/fcf78d3d6964632ed7f8b85aa045d667b1da47d4/pkg/sidecar-controller/groupsnapshot_helper.go#L242-L249
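To make the failure mode concrete, here is a hedged sketch with hypothetical function and variable names, reusing the illustrative imports from the earlier sketch (see the linked lines for the real code). If the helper treats any lookup error on a member content as fatal, a member that was already deleted blocks the whole group deletion:

```go
// pruneGroupMembers is a hypothetical sketch of the buggy pattern, not the
// code at the linked lines: each member content is looked up before deletion,
// and ANY error aborts the pass.
func pruneGroupMembers(ctx context.Context, client snapclient.Interface, memberContentNames []string) error {
	for _, name := range memberContentNames {
		content, err := client.SnapshotV1().VolumeSnapshotContents().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// When the member was already deleted this is a NotFound error;
			// returning it here re-queues the group snapshot deletion forever
			// instead of skipping the missing member.
			return fmt.Errorf("cannot get VolumeSnapshotContent %q: %w", name, err)
		}
		_ = content // ...delete the member and update the group status here...
	}
	return nil
}
```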
What you expected to happen:
The volumegroupsnapshot deletion should complete.
How to reproduce it:
It happens sometimes, not always.
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):
- Logs: