kubernetes-csi / external-snapshotter

Sidecar container that watches Kubernetes Snapshot CRD objects and triggers CreateSnapshot/DeleteSnapshot against a CSI endpoint.
Apache License 2.0

Use group specific annotation for the group secrets #1048

Closed Madhu-1 closed 5 months ago

Madhu-1 commented 5 months ago

This PR fixes a bug in the secret handling of VolumeGroupSnapshot and introduces two new annotations, "groupsnapshot.storage.kubernetes.io/deletion-secret-name" and "groupsnapshot.storage.kubernetes.io/deletion-secret-namespace", to store the group secret on the VolumeGroupSnapshotContent object.

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind api-change

/kind bug

/kind cleanup
/kind design
/kind documentation
/kind failing-test
/kind feature
/kind flake

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Fix bug in retrieving the group secret from the volumegroupsnapshotcontent object
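
The two annotation keys are real (they appear on the content object in the test log in this thread), but the helper names below are hypothetical and the actual controller code differs in detail. This sketch only illustrates the idea: the secret reference resolved from the VolumeGroupSnapshotClass parameters is recorded on the content at creation time, so deletion can still find it even if the class is later changed or removed.

```go
package main

import "fmt"

// Annotation keys introduced by this PR for recording the group secret on the
// VolumeGroupSnapshotContent object.
const (
	deletionSecretNameAnno      = "groupsnapshot.storage.kubernetes.io/deletion-secret-name"
	deletionSecretNamespaceAnno = "groupsnapshot.storage.kubernetes.io/deletion-secret-namespace"
)

// setGroupSecretAnnotations (hypothetical helper) records the secret
// reference on the content's annotations at creation time.
func setGroupSecretAnnotations(annotations map[string]string, name, namespace string) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[deletionSecretNameAnno] = name
	annotations[deletionSecretNamespaceAnno] = namespace
	return annotations
}

// getGroupSecretRef (hypothetical helper) reads the reference back at
// deletion time; ok is false when either annotation is missing.
func getGroupSecretRef(annotations map[string]string) (name, namespace string, ok bool) {
	name, hasName := annotations[deletionSecretNameAnno]
	namespace, hasNS := annotations[deletionSecretNamespaceAnno]
	return name, namespace, hasName && hasNS
}

func main() {
	// Values taken from the cephfs test log in this thread.
	anns := setGroupSecretAnnotations(nil, "rook-csi-cephfs-provisioner", "rook-ceph")
	name, ns, ok := getGroupSecretRef(anns)
	fmt.Println(name, ns, ok) // rook-csi-cephfs-provisioner rook-ceph true
}
```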
k8s-ci-robot commented 5 months ago

Hi @Madhu-1. Thanks for your PR.

I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
Madhu-1 commented 5 months ago

/assign @xing-yang

Madhu-1 commented 5 months ago

Below is the log from the testing

[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl create -f volumegroupsnapshot.yaml 
volumegroupsnapshot.groupsnapshot.storage.k8s.io/my-group-snapshot created
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumegroupsnapshot
NAME                READYTOUSE   VOLUMEGROUPSNAPSHOTCLASS          VOLUMEGROUPSNAPSHOTCONTENT                              CREATIONTIME   AGE
my-group-snapshot   true         csi-cephfsplugin-groupsnapclass   groupsnapcontent-6356f378-6347-4ec5-8f18-deecb4961dcc   45s            46s
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumegroupsnapshotclass -oyaml
apiVersion: v1
items:
- apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
  deletionPolicy: Delete
  driver: rook-ceph.cephfs.csi.ceph.com
  kind: VolumeGroupSnapshotClass
  metadata:
    creationTimestamp: "2024-03-28T12:05:00Z"
    generation: 1
    name: csi-cephfsplugin-groupsnapclass
    resourceVersion: "24592"
    uid: da89c0ef-39fb-4322-b695-e4bf40f89a2e
  parameters:
    clusterID: rook-ceph
    csi.storage.k8s.io/group-snapshotter-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/group-snapshotter-secret-namespace: rook-ceph
    fsName: myfs
kind: List
metadata:
  resourceVersion: ""
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumesnapshot
NAME                                                                                           READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT                                                                             RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                                                                   CREATIONTIME   AGE
snapshot-ddb89173044c5ccedc1909c90d804f9c40b36b1c99d149ea98222f1e03d2ad36-2024-03-28-1.40.24   true                     snapcontent-ddb89173044c5ccedc1909c90d804f9c40b36b1c99d149ea98222f1e03d2ad36-2024-03-28-1.40.24   1Gi                           snapcontent-ddb89173044c5ccedc1909c90d804f9c40b36b1c99d149ea98222f1e03d2ad36-2024-03-28-1.40.24   58s            58s
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumegroupsnapshotcontent
NAME                                                    READYTOUSE   DELETIONPOLICY   DRIVER                          VOLUMEGROUPSNAPSHOTCLASS          VOLUMEGROUPSNAPSHOTNAMESPACE   VOLUMEGROUPSNAPSHOT   AGE
groupsnapcontent-6356f378-6347-4ec5-8f18-deecb4961dcc   true         Delete           rook-ceph.cephfs.csi.ceph.com   csi-cephfsplugin-groupsnapclass   default                        my-group-snapshot     65s
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumegroupsnapshotcontent
NAME                                                    READYTOUSE   DELETIONPOLICY   DRIVER                          VOLUMEGROUPSNAPSHOTCLASS          VOLUMEGROUPSNAPSHOTNAMESPACE   VOLUMEGROUPSNAPSHOT   AGE
groupsnapcontent-6356f378-6347-4ec5-8f18-deecb4961dcc   true         Delete           rook-ceph.cephfs.csi.ceph.com   csi-cephfsplugin-groupsnapclass   default                        my-group-snapshot     81s
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumegroupsnapshotcontent -oyaml
apiVersion: v1
items:
- apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
  kind: VolumeGroupSnapshotContent
  metadata:
    annotations:
      groupsnapshot.storage.kubernetes.io/deletion-secret-name: rook-csi-cephfs-provisioner
      groupsnapshot.storage.kubernetes.io/deletion-secret-namespace: rook-ceph
    creationTimestamp: "2024-03-28T13:40:23Z"
    finalizers:
    - groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection
    generation: 1
    name: groupsnapcontent-6356f378-6347-4ec5-8f18-deecb4961dcc
    resourceVersion: "33924"
    uid: dd36873b-7d52-41a2-a175-5d2659fcc2bc
  spec:
    deletionPolicy: Delete
    driver: rook-ceph.cephfs.csi.ceph.com
    source:
      volumeHandles:
      - 0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373
    volumeGroupSnapshotClassName: csi-cephfsplugin-groupsnapclass
    volumeGroupSnapshotRef:
      apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
      kind: VolumeGroupSnapshot
      name: my-group-snapshot
      namespace: default
      resourceVersion: "33901"
      uid: 6356f378-6347-4ec5-8f18-deecb4961dcc
  status:
    creationTime: 1711633224180366514
    readyToUse: true
    volumeGroupSnapshotHandle: 0001-0009-rook-ceph-0000000000000001-5b10f444-8e0e-4e37-93ae-796f3f66b0be
    volumeSnapshotContentRefList:
    - kind: VolumeSnapshotContent
      name: snapcontent-ddb89173044c5ccedc1909c90d804f9c40b36b1c99d149ea98222f1e03d2ad36-2024-03-28-1.40.24
kind: List
metadata:
  resourceVersion: ""
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl delete -f volumegroupsnapshot.yaml 
volumegroupsnapshot.groupsnapshot.storage.k8s.io "my-group-snapshot" deleted
[🎩︎]mrajanna@li-2cfbef4c-22d9-11b2-a85c-a3e4a93c405f cephfs $]kubectl get volumegroupsnapshot,volumegroupsnapshotcontent,volumesnapshot,volumesnapshotcontent
No resources found
xing-yang commented 5 months ago

/ok-to-test

Madhu-1 commented 5 months ago

@xing-yang Can you please review this PR?

nixpanic commented 5 months ago

/lgtm

xing-yang commented 5 months ago

Can you squash the commits?

Madhu-1 commented 5 months ago

> Can you squash the commits?

@xing-yang squashed the commits, PTAL

xing-yang commented 5 months ago

/lgtm /approve

k8s-ci-robot commented 5 months ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Madhu-1, xing-yang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/kubernetes-csi/external-snapshotter/blob/master/OWNERS)~~ [xing-yang]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.