kubernetes / cloud-provider-openstack

Apache License 2.0

[cinder-csi-plugin:bug] Volume Snapshot deletion is not detecting errors #2294

Open josedev-union opened 1 year ago

josedev-union commented 1 year ago

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: I am using the Delete policy for both the PV class and the VS (VolumeSnapshot) class. We use TrilioVault for Kubernetes (TVK) as our K8s backup tool and run the TVK preflight plugin as part of the preflight checks during cluster provisioning. This TVK preflight check plugin creates a PVC and a VS from that source PVC, then creates a restored PVC from the snapshot. Once all checks are finished, it deletes all test resources.

In OpenStack, if a volume was created from a volume snapshot, the snapshot cannot be deleted before its child volume is deleted. So the first attempt to delete the VS fails, because it takes a little while to delete the restored volume. Of course, deleting the source volume also fails, because it can only be deleted once the VS and the restored volume are gone.

But the OpenStack snapshot API is asynchronous, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion later fails. In other words, the cs.Cloud.DeleteSnapshot func will never fail in our scenario; the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue and the request will not be requeued, even though the OpenStack resources are still there. https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L415

This results in garbage resources (the volume of the source PVC and the volume snapshot of the VS) being left on the OpenStack cloud.

What you expected to happen: I think this could be considered an OpenStack API issue, more or less, but I think we can work around it at the plugin level. The CSI plugin would check the volume snapshot status after requesting the deletion. If the volume snapshot is removed, it returns OK. If the volume snapshot status changes to ERROR, or it remains Available (we can set a timeout for this check), it returns an error so the deletion can be requeued.
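
For illustration, here is a minimal sketch in Go of what that check could look like, assuming a hypothetical `getSnapshotStatus` lookup helper and arbitrary interval/timeout values (none of this is the plugin's actual API):

```go
package cinder

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitSnapshotDeleted polls until the Cinder snapshot is actually gone after
// the async DELETE (202), and returns a gRPC error if it is still present or
// entered an error status before the timeout, so the request gets requeued.
// getSnapshotStatus is a hypothetical lookup helper: it returns the snapshot
// status and whether the snapshot still exists.
func waitSnapshotDeleted(ctx context.Context, getSnapshotStatus func(id string) (string, bool, error), snapshotID string) error {
	const (
		interval = 5 * time.Second
		timeout  = 2 * time.Minute
	)
	err := wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
		snapStatus, found, err := getSnapshotStatus(snapshotID)
		if err != nil {
			return false, err
		}
		if !found {
			// The snapshot is gone; the deletion really finished.
			return true, nil
		}
		if snapStatus == "error" || snapStatus == "error_deleting" {
			// Deletion failed on the OpenStack side; stop polling early.
			return false, fmt.Errorf("snapshot %s entered status %q", snapshotID, snapStatus)
		}
		// Still "available" or "deleting": keep polling until the timeout.
		return false, nil
	})
	if err != nil {
		// A non-OK gRPC code makes the external-snapshotter retry DeleteSnapshot.
		return status.Errorf(codes.Internal, "snapshot %s was not deleted: %v", snapshotID, err)
	}
	return nil
}
```

This mirrors the existing "wait for snapshot ready" pattern discussed below, just applied to deletion.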

How to reproduce it: It can be reproduced using the example resources at https://github.com/kubernetes/cloud-provider-openstack/tree/master/examples/cinder-csi-plugin/snapshot

Anything else we need to know?:

Environment:

kayrus commented 1 year ago

@josedev-union see a similar issue in https://github.com/Lirt/velero-plugin-for-openstack/issues/78 Do you think a full clone will be a solution to your issue?

josedev-union commented 1 year ago

> @josedev-union see a similar issue in Lirt/velero-plugin-for-openstack#78 Do you think a full clone will be a solution to your issue?

@kayrus we don't have a plan to use Velero. Btw, I don't think cloning can replace snapshots. Volume clone is only possible for Available (idle) volumes. But in practice, we have to back up applications on the fly, without downtime, you know.

kayrus commented 1 year ago

@josedev-union I'm not asking you to use velero. I'm just sharing an issue that has the same consequences.

> Volume clone is only possible for Available (idle) volumes

The volume can be cloned while it has the in-use status. If I'm not mistaken, a shadow snapshot with force: true is created, then a new volume is restored from it. Can you try it with an in-use volume in your environment: `openstack volume create --source test-in-use-vol`?

jichenjc commented 1 year ago

> But the OpenStack snapshot API is asynchronous, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion later fails. In other words, the cs.Cloud.DeleteSnapshot func will never fail in our scenario; the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue and the request will not be requeued, even though the OpenStack resources are still there.

I think the key ask here is to wait until the snapshot is really deleted, just like https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L388 waits for the snapshot to become ready. Because the API returns a 202 response (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=delete-a-snapshot-detail#delete-a-snapshot), I think we may want to consider waiting for the snapshot to really be deleted (with a timeout, of course).

kayrus commented 1 year ago

@jichenjc if the snapshot is in an error state, the API won't allow us to delete it. The logic must be a bit cleverer and more complicated, e.g. reset the snapshot status, then try to delete it. I faced the same issue in the velero plugin and am testing this fix in my env. If the tests are good, we can onboard this logic into CSI.
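
As a rough sketch of that reset-then-delete flow (the `snapshotCloud` interface and both method names are hypothetical stand-ins, not the plugin's existing cloud interface):

```go
package cinder

import "fmt"

// snapshotCloud is a hypothetical slice of the cloud interface used only for
// this sketch; the real plugin would extend its existing OpenStack interface.
type snapshotCloud interface {
	ResetSnapshotStatus(snapshotID, newStatus string) error // requires admin credentials
	DeleteSnapshot(snapshotID string) error                 // async: Cinder answers 202
}

// deleteErroredSnapshot sketches the "reset status, then delete" flow for
// snapshots stuck in an error state, as described above.
func deleteErroredSnapshot(cloud snapshotCloud, snapshotID, currentStatus string) error {
	if currentStatus == "error" || currentStatus == "error_deleting" {
		// Per the discussion above, the API refuses to delete a snapshot in an
		// error state, so reset its status first (admin-only operation).
		if err := cloud.ResetSnapshotStatus(snapshotID, "available"); err != nil {
			return fmt.Errorf("resetting status of snapshot %s: %w", snapshotID, err)
		}
	}
	// The DELETE itself is still async; the caller must verify afterwards that
	// the snapshot is really gone before treating the operation as complete.
	return cloud.DeleteSnapshot(snapshotID)
}
```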

jichenjc commented 1 year ago

That's a good suggestion, I will read the PR you provided :) Thanks for the great info~

josedev-union commented 1 year ago

> @jichenjc if the snapshot is in an error state, the API won't allow us to delete it. The logic must be a bit cleverer and more complicated, e.g. reset the snapshot status, then try to delete it. I faced the same issue in the velero plugin and am testing this fix in my env. If the tests are good, we can onboard this logic into CSI.

I agree that it will be a bit complicated. But I think we need to keep in mind that resetting the status requires admin permission. We run cinder-csi-plugin with a tenant-scoped service account that has the member role (downstream clusters will normally not have such wide permissions). I checked your PR, and it would result in an error on the status reset because of the missing permission. Of course, without the reset operation the snapshot deletion will still fail for another reason, like a timeout or an invalid status, if the snapshot is in error_deleting status, for example. It may seem that only the error message is different, but in fact this is a change in the role requirements for the plugin.

kayrus commented 1 year ago

@josedev-union we have two kinds of permissions in our OpenStack env: admin and cloud admin. Admin permissions allow resetting the status; cloud admin can force delete the resource. If you also have such roles, then the status reset can be triggered. Nevertheless, this reset option should be toggleable in the CSI config.
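
A sketch of what such a toggle could look like; the section, option name, and struct shown here are purely illustrative and not existing plugin configuration:

```go
package openstack

// Hypothetical config knob (name and field are illustrative only): the
// privileged status reset would only be attempted when the operator has
// explicitly enabled it, i.e. when the configured credentials are known to
// carry the required admin role.
//
//	[BlockStorage]
//	reset-snapshot-status-on-delete = true
type BlockStorageOpts struct {
	// ... existing block storage options ...
	ResetSnapshotStatusOnDelete bool `gcfg:"reset-snapshot-status-on-delete"`
}
```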

kayrus commented 1 year ago

/assign

jichenjc commented 1 year ago

> Admin permissions allow resetting the status; cloud admin can force delete the resource.

https://docs.openstack.org/keystone/latest/admin/service-api-protection.html#admin

so in your context (the right side is the definition from the doc above)?

admin = project admin, cloud admin = system admin

mdbooth commented 1 year ago

> But the OpenStack snapshot API is asynchronous, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion later fails. In other words, the cs.Cloud.DeleteSnapshot func will never fail in our scenario; the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue and the request will not be requeued, even though the OpenStack resources are still there.
>
> I think the key ask here is to wait until the snapshot is really deleted, just like https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L388 waits for the snapshot to become ready. Because the API returns a 202 response (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=delete-a-snapshot-detail#delete-a-snapshot), I think we may want to consider waiting for the snapshot to really be deleted (with a timeout, of course).

I think in this case we MUST NOT wait in the controller's reconcile loop for the resource to be deleted. Depending on the order of deletion, this may take an arbitrarily long time. It may never be deleted if the user doesn't also delete the dependent resources. We must exit the reconcile loop and allow the manager to call us again with exponential backoff.
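
Concretely, a non-blocking variant might look like the sketch below, where `deleteSnapshot` and `snapshotExists` are hypothetical helpers standing in for the real Cinder calls:

```go
package cinder

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteSnapshotNonBlocking issues the async DELETE, does a single existence
// check, and fails the RPC if the snapshot is still there, so the CSI
// sidecar retries DeleteSnapshot with exponential backoff instead of the
// plugin blocking inside the call. deleteSnapshot and snapshotExists are
// hypothetical helpers standing in for the real Cinder calls.
func deleteSnapshotNonBlocking(deleteSnapshot func(id string) error, snapshotExists func(id string) (bool, error), snapshotID string) error {
	if err := deleteSnapshot(snapshotID); err != nil {
		return status.Errorf(codes.Internal, "DeleteSnapshot %s failed: %v", snapshotID, err)
	}
	exists, err := snapshotExists(snapshotID)
	if err != nil {
		return status.Errorf(codes.Internal, "checking snapshot %s after delete: %v", snapshotID, err)
	}
	if exists {
		// Still present, e.g. because a volume restored from it has not been
		// deleted yet: return an error and let the sidecar requeue the request.
		return status.Errorf(codes.Internal, "snapshot %s still exists after delete request", snapshotID)
	}
	return nil
}
```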

josedev-union commented 1 year ago

> But the OpenStack snapshot API is asynchronous, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion later fails. In other words, the cs.Cloud.DeleteSnapshot func will never fail in our scenario; the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue and the request will not be requeued, even though the OpenStack resources are still there.
>
> I think the key ask here is to wait until the snapshot is really deleted, just like https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L388 waits for the snapshot to become ready. Because the API returns a 202 response (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=delete-a-snapshot-detail#delete-a-snapshot), I think we may want to consider waiting for the snapshot to really be deleted (with a timeout, of course).
>
> I think in this case we MUST NOT wait in the controller's reconcile loop for the resource to be deleted. Depending on the order of deletion, this may take an arbitrarily long time. It may never be deleted if the user doesn't also delete the dependent resources. We must exit the reconcile loop and allow the manager to call us again with exponential backoff.

Yeah, that could also be another solution. The primary requirement is to requeue the deletion of the VS until the associated OpenStack resource is actually removed, whether that is bounded by a specific number of attempts or retried until it succeeds.

kayrus commented 1 year ago

I just noticed that the Manila CSI driver doesn't have a finalizer, and during my tests I had a Manila PVC which was successfully deleted from the k8s API but still stuck in OpenStack.

jichenjc commented 1 year ago

> I think in this case we MUST NOT wait in the controller's reconcile loop for the resource to be deleted. Depending on the order of deletion, this may take an arbitrarily long time. It may never be deleted if the user doesn't also delete the dependent resources. We must exit the reconcile loop and allow the manager to call us again with exponential backoff.

Agree, this might be another solution. It seems like the better option, but the goal is the same: check the deletion instead of taking the 202 as complete.

jeffyjf commented 1 year ago

Hi, is anyone still working on this issue? @kayrus, if you haven't started this work, could you leave it to me?

I think we should introduce a finalizer to resolve this issue. When a user creates a volume from a snapshot, cinder-csi-plugin should add a finalizer to the snapshot. When the last volume created from this snapshot is deleted, cinder-csi-plugin will remove the finalizer.

mdbooth commented 1 year ago

> Hi, is anyone still working on this issue? @kayrus, if you haven't started this work, could you leave it to me?
>
> I think we should introduce a finalizer to resolve this issue. When a user creates a volume from a snapshot, cinder-csi-plugin should add a finalizer to the snapshot. When the last volume created from this snapshot is deleted, cinder-csi-plugin will remove the finalizer.

Or alternatively each individual volume created from the snapshot would add its own finalizer. That would have the advantage of being its own reference count, and also giving the user a clue as to specifically which volumes are blocking the deletion of the snapshot.

I can't think of a case where I've seen this pattern used before, though. We should take a moment to consider if there's a reason for that.
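
For illustration, here is a rough sketch of the per-volume finalizer idea using controller-runtime helpers; the finalizer prefix, the choice of VolumeSnapshotContent as the target object, and the wiring of a Kubernetes client into the plugin are all assumptions, not existing plugin behaviour:

```go
package cinder

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// finalizerPrefix is an assumed name; each volume created from a snapshot
// would add "<prefix><volumeID>" to the snapshot's VolumeSnapshotContent,
// acting as a per-volume reference count.
const finalizerPrefix = "cinder.csi.openstack.org/volume-from-snapshot-"

var vscGVK = schema.GroupVersionKind{
	Group:   "snapshot.storage.k8s.io",
	Version: "v1",
	Kind:    "VolumeSnapshotContent",
}

// addVolumeFinalizer marks the (cluster-scoped) VolumeSnapshotContent as
// still referenced by volumeID.
func addVolumeFinalizer(ctx context.Context, c client.Client, vscName, volumeID string) error {
	vsc := &unstructured.Unstructured{}
	vsc.SetGroupVersionKind(vscGVK)
	if err := c.Get(ctx, client.ObjectKey{Name: vscName}, vsc); err != nil {
		return err
	}
	if controllerutil.AddFinalizer(vsc, finalizerPrefix+volumeID) {
		return c.Update(ctx, vsc)
	}
	return nil // finalizer was already present
}

// removeVolumeFinalizer drops the reference when the volume is deleted; once
// the last one is gone, the snapshot deletion can proceed.
func removeVolumeFinalizer(ctx context.Context, c client.Client, vscName, volumeID string) error {
	vsc := &unstructured.Unstructured{}
	vsc.SetGroupVersionKind(vscGVK)
	if err := c.Get(ctx, client.ObjectKey{Name: vscName}, vsc); err != nil {
		return err
	}
	if controllerutil.RemoveFinalizer(vsc, finalizerPrefix+volumeID) {
		return c.Update(ctx, vsc)
	}
	return nil
}
```

Note that such finalizers only block deletion of the Kubernetes object; the ordering on the Cinder side is still enforced by OpenStack itself.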

jeffyjf commented 1 year ago

Hi @mdbooth

We all agree that if a volume is created from a snapshot, the snapshot shouldn't be deleted before the volume, right? But I haven't understood this sentence:

> Or alternatively each individual volume created from the snapshot would add its own finalizer.

Do you mean adding a finalizer to the PersistentVolume? IMO, adding a finalizer to the PersistentVolume can't prevent the VolumeSnapshot from being deleted before the PersistentVolume. Or maybe you mean adding owner references to the PersistentVolume? But owner references also can't prevent the VolumeSnapshot from being deleted before the PersistentVolume.

For more information about finalizers and owner references, please refer to the links below: [1] https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/ [2] https://kubernetes.io/docs/concepts/architecture/garbage-collection/

mdbooth commented 1 year ago

> Hi, is anyone still working on this issue? @kayrus, if you haven't started this work, could you leave it to me?
>
> I think we should introduce a finalizer to resolve this issue. When a user creates a volume from a snapshot, cinder-csi-plugin should add a finalizer to the snapshot. When the last volume created from this snapshot is deleted, cinder-csi-plugin will remove the finalizer.

I was referring to this finalizer ☝️

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

josedev-union commented 8 months ago

/remove-lifecycle rotten

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

kayrus commented 5 months ago

/remove-lifecycle rotten

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/2294#issuecomment-2254125315):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

stephenfin commented 1 month ago

/reopen

k8s-ci-robot commented 1 month ago

@stephenfin: Reopened this issue.

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/2294#issuecomment-2388764994):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

stephenfin commented 1 month ago

/remove-lifecycle rotten

gcezaralmeida commented 4 days ago

Hey guys, any updates on this issue?