josedev-union opened this issue 1 year ago
@josedev-union see a similar issue in https://github.com/Lirt/velero-plugin-for-openstack/issues/78. Do you think a full clone would be a solution to your issue?
@kayrus we don't have a plan to use velero. Btw, I don't think a clone can replace a snapshot. Volume cloning is only possible for Available (idle) volumes, but in practice we have to back up applications on the fly, without downtime.
@josedev-union I'm not asking you to use velero. I'm just sharing an issue that has the same consequences.
Volume clone is only possible for Available (idle) volumes
The volume can be cloned with the in-use status. If I'm not mistaken, a shadow snapshot with force: true is created, then a new volume is restored from it. Can you try with an in-use volume in your environment: openstack volume create --source test-in-use-vol?
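For reference, a minimal sketch of such a clone (assuming gophercloud v1's blockstorage v3 API; volume ID, name and size are illustrative, this is not code from this repo):

```go
package example

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
)

// cloneVolume creates a new volume from an existing source volume, which is the
// programmatic equivalent of `openstack volume create --source <volume>`.
// Cinder takes the shadow snapshot internally, so the source may be in-use.
func cloneVolume(client *gophercloud.ServiceClient, sourceVolID, name string, sizeGB int) (*volumes.Volume, error) {
	opts := volumes.CreateOpts{
		Name:        name,
		Size:        sizeGB,
		SourceVolID: sourceVolID, // clone from this volume instead of creating an empty one
	}
	return volumes.Create(client, opts).Extract()
}
```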
But the OpenStack snapshot API is async, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion failed. I.e. the cs.Cloud.DeleteSnapshot func will never fail in our scenario, the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue, and the deletion will not be requeued even though the OpenStack resources are still there.
I think the key ask here is to wait until the snapshot is really deleted, just like https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L388 waits for the snapshot to become ready. Because the API returns a 202 response (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=delete-a-snapshot-detail#delete-a-snapshot), I think we should consider waiting for the snapshot to really be deleted (with a timeout, of course).
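For illustration, a sketch of what such a wait could look like; this is not the repo's existing helper, and it assumes gophercloud v1 error types and the apimachinery polling helper:

```go
package example

import (
	"fmt"
	"time"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/snapshots"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitSnapshotDeleted polls the snapshot until Cinder returns 404 (really gone),
// instead of trusting the 202 returned by the async DELETE call.
func waitSnapshotDeleted(client *gophercloud.ServiceClient, snapID string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		snap, err := snapshots.Get(client, snapID).Extract()
		if err != nil {
			if _, ok := err.(gophercloud.ErrDefault404); ok {
				return true, nil // snapshot is gone, deletion finished
			}
			return false, err // unexpected API error, surface it
		}
		if snap.Status == "error_deleting" {
			// deletion failed on the Cinder side; fail so the request can be retried
			return false, fmt.Errorf("snapshot %s entered error_deleting", snapID)
		}
		return false, nil // still deleting, keep polling
	})
}
```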
@jichenjc if the snapshot is in an error state, the API won't allow you to delete it. The logic must be a bit cleverer and more complicated, e.g. reset the snapshot status, then try to delete it. I faced the same issue in the velero plugin and am testing this fix in my env. If the tests are good, we can onboard this logic into CSI.
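A rough sketch of that reset-then-delete flow against the Cinder API (the os-reset_status action is admin-only, as discussed below; the helper and its error handling are illustrative, not existing plugin code):

```go
package example

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/snapshots"
)

// resetAndDeleteSnapshot moves an errored snapshot back to "available" via the
// admin-only os-reset_status action, then issues a regular delete. Without
// admin rights the reset call fails with 403.
func resetAndDeleteSnapshot(client *gophercloud.ServiceClient, snapID string) error {
	body := map[string]interface{}{
		"os-reset_status": map[string]interface{}{"status": "available"},
	}
	_, err := client.Post(client.ServiceURL("snapshots", snapID, "action"), body, nil, &gophercloud.RequestOpts{
		OkCodes: []int{202},
	})
	if err != nil {
		return err
	}
	return snapshots.Delete(client, snapID).ExtractErr()
}
```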
That's a good suggestion, will read your provided PR :) Thanks for the great info~
@jichenjc if the snapshot is in an error state, the API won't allow you to delete it. The logic must be a bit cleverer and more complicated, e.g. reset the snapshot status, then try to delete it. I faced the same issue in the velero plugin and am testing this fix in my env. If the tests are good, we can onboard this logic into CSI.
I agree that it will be a bit complicated. But I think we need to keep in mind that resetting the status requires admin permission. We run cinder-csi-plugin using a tenant-scoped service account with the member role. (Downstream clusters will not normally have such wide permissions.)
I checked your PR and it will result in an error on the status reset because of the missing permission. Of course, without the reset operation, snapshot deletion will fail for another reason, such as a timeout or an invalid status, if the snapshot is in the error_deleting status, for example.
It may seem that only the error message is different, but in fact this is a change in the role requirements for the plugin.
@josedev-union we have two kinds of permissions in our OpenStack env: admin and cloud admin. Admin permissions allow resetting the status; cloud admin can force-delete the resource. If you also have such roles, then the status reset can be triggered. Nevertheless, this reset option should be toggleable in the CSI config.
/assign
Admin permissions allow resetting the status; cloud admin can force-delete the resource.
https://docs.openstack.org/keystone/latest/admin/service-api-protection.html#admin
So in your context (the right side being the definitions from the doc above):
admin = project admin, cloud admin = system admin?
But the OpenStack snapshot API is async, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion failed. I.e. the cs.Cloud.DeleteSnapshot func will never fail in our scenario, the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue, and the deletion will not be requeued even though the OpenStack resources are still there.
I think the key ask here is to wait until the snapshot is really deleted, just like https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L388 waits for the snapshot to become ready. Because the API returns a 202 response (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=delete-a-snapshot-detail#delete-a-snapshot), I think we should consider waiting for the snapshot to really be deleted (with a timeout, of course).
I think in this case we MUST NOT wait in the controller's reconcile loop for the resource to be deleted. Depending on the order of deletion this may take an arbitrarily long time. It may never be deleted if the user doesn't also delete the dependent resources. We must exit the reconcile loop and allow the manager to call us again with exponential backoff.
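For concreteness, a sketch of the non-blocking variant of the CSI DeleteSnapshot RPC: issue the delete, check once, and return an error so the external-snapshotter sidecar retries with exponential backoff. The trimmed-down cloud interface and the GetSnapshotByID lookup are assumptions for the sketch, not the plugin's actual code:

```go
package example

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/snapshots"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// snapshotCloud is a tiny stand-in for the plugin's OpenStack interface,
// just enough for this sketch.
type snapshotCloud interface {
	DeleteSnapshot(snapID string) error
	GetSnapshotByID(snapID string) (*snapshots.Snapshot, error)
}

type controllerServer struct {
	cloud snapshotCloud
}

// DeleteSnapshot issues the async Cinder delete and then checks once whether
// the snapshot is actually gone. If not, it returns an error so the
// external-snapshotter sidecar requeues the request with exponential backoff,
// instead of this RPC blocking in a wait loop.
func (cs *controllerServer) DeleteSnapshot(ctx context.Context, req *csi.DeleteSnapshotRequest) (*csi.DeleteSnapshotResponse, error) {
	id := req.GetSnapshotId()

	if err := cs.cloud.DeleteSnapshot(id); err != nil {
		// A real implementation would treat "not found" as success here.
		return nil, status.Errorf(codes.Internal, "DeleteSnapshot failed: %v", err)
	}

	// Cinder answered 202 (accepted); that does not mean the snapshot is gone.
	if _, err := cs.cloud.GetSnapshotByID(id); err == nil {
		return nil, status.Errorf(codes.Internal, "snapshot %s is still being deleted, retry later", id)
	}

	return &csi.DeleteSnapshotResponse{}, nil
}
```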
But the OpenStack snapshot API is async, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion failed. I.e. the cs.Cloud.DeleteSnapshot func will never fail in our scenario, the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue, and the deletion will not be requeued even though the OpenStack resources are still there.
I think the key ask here is to wait until the snapshot is really deleted, just like https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L388 waits for the snapshot to become ready. Because the API returns a 202 response (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=delete-a-snapshot-detail#delete-a-snapshot), I think we should consider waiting for the snapshot to really be deleted (with a timeout, of course).
I think in this case we MUST NOT wait in the controller's reconcile loop for the resource to be deleted. Depending on the order of deletion this may take an arbitrarily long time. It may never be deleted if the user doesn't also delete the dependent resources. We must exit the reconcile loop and allow the manager to call us again with exponential backoff.
Yeah, that could also be another solution. The primary requirement is to requeue the deletion of the VS until the associated OpenStack resource is successfully removed, either up to a specific limit of attempts or until it is successfully deleted.
I just noticed that manila CSI doesn't have a finalizer, and during my tests I had a manila PVC which was successfully deleted from the k8s API but was still stuck in OpenStack.
I think in this case we MUST NOT wait in the controller's reconcile loop for the resource to be deleted. Depending on the order of deletion this may take an arbitrarily long time. It may never be deleted if the user doesn't also delete the dependent resources. We must exit the reconcile loop and allow the manager to call us again with exponential backoff.
Agreed, this might be another solution. It seems to be the better option, but the goal is the same: check the deletion instead of taking 202 as complete.
Hi, is anyone still working on this issue? @kayrus, if you haven't started this work, could you leave it to me?
I think that we should introduce a finalizer to resolve the issue. When a user creates a volume from a snapshot, cinder-csi-plugin should add a finalizer to the snapshot. When the last volume created from this snapshot is deleted, cinder-csi-plugin will remove this finalizer.
Hi, is anyone still working on this issue? @kayrus, if you haven't started this work, could you leave it to me?
I think that we should introduce a finalizer to resolve the issue. When a user creates a volume from a snapshot, cinder-csi-plugin should add a finalizer to the snapshot. When the last volume created from this snapshot is deleted, cinder-csi-plugin will remove this finalizer.
Or alternatively each individual volume created from the snapshot would add its own finalizer. That would have the advantage of being its own reference count, and also giving the user a clue as to specifically which volumes are blocking the deletion of the snapshot.
I can't think of a case where I've seen this pattern used before, though. We should take a moment to consider if there's a reason for that.
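A rough sketch of the per-volume variant (the finalizer prefix, helper names and the use of the external-snapshotter clientset are assumptions for illustration, not existing plugin code):

```go
package example

import (
	"context"

	snapclient "github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// volumeFinalizer builds a per-volume finalizer name, so each volume restored
// from the snapshot holds its own "reference" and the user can see exactly
// which volumes still block the snapshot deletion.
func volumeFinalizer(volumeID string) string {
	return "cinder.csi.openstack.org/volume-in-use-" + volumeID
}

// addSnapshotFinalizer records that volumeID was created from the snapshot.
func addSnapshotFinalizer(ctx context.Context, c snapclient.Interface, ns, snapshotName, volumeID string) error {
	vs, err := c.SnapshotV1().VolumeSnapshots(ns).Get(ctx, snapshotName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fin := volumeFinalizer(volumeID)
	for _, f := range vs.Finalizers {
		if f == fin {
			return nil // already recorded
		}
	}
	vs.Finalizers = append(vs.Finalizers, fin)
	_, err = c.SnapshotV1().VolumeSnapshots(ns).Update(ctx, vs, metav1.UpdateOptions{})
	return err
}

// removeSnapshotFinalizer drops the volume's finalizer once that volume is
// gone; when the last one is removed, the VolumeSnapshot can finally be deleted.
func removeSnapshotFinalizer(ctx context.Context, c snapclient.Interface, ns, snapshotName, volumeID string) error {
	vs, err := c.SnapshotV1().VolumeSnapshots(ns).Get(ctx, snapshotName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fin := volumeFinalizer(volumeID)
	kept := vs.Finalizers[:0]
	for _, f := range vs.Finalizers {
		if f != fin {
			kept = append(kept, f)
		}
	}
	vs.Finalizers = kept
	_, err = c.SnapshotV1().VolumeSnapshots(ns).Update(ctx, vs, metav1.UpdateOptions{})
	return err
}
```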
Hi @mdbooth
We all agree that if a volume is created from a snapshot, the snapshot shouldn't be deleted before the volume, right? But I haven't understood this sentence:
Or alternatively each individual volume created from the snapshot would add its own finalizer.
Do you mean adding a finalizer to the PersistentVolume? IMO, adding a finalizer to the PersistentVolume can't prevent the VolumeSnapshot from being deleted before the PersistentVolume. Or do you perhaps mean adding owner references to the PersistentVolume? But owner references also can't prevent the VolumeSnapshot from being deleted before the PersistentVolume.
For more information about finalizers and owner references, please refer to the links below.
[1] https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/
[2] https://kubernetes.io/docs/concepts/architecture/garbage-collection/
Hi, is anyone still working on this issue? @kayrus, if you haven't started this work, could you leave it to me?
I think that we should introduce a finalizer to resolve the issue. When a user creates a volume from a snapshot, cinder-csi-plugin should add a finalizer to the snapshot. When the last volume created from this snapshot is deleted, cinder-csi-plugin will remove this finalizer.
I was referring to this finalizer ☝️
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle rotten
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle rotten
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Reopen this issue with /reopen
Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@stephenfin: Reopened this issue.
/remove-lifecycle rotten
Hey guys, any updates on this issue?
Is this a BUG REPORT or FEATURE REQUEST?:
What happened: I am using the Delete policy for both the PV class and the VS (VolumeSnapshot) class. We use TrilioVault for Kubernetes as our K8s backup tool and run the TVK preflight plugin as part of the preflight check during cluster provisioning. This TVK preflight check plugin creates a PVC and a VS from this source PVC, then creates a restore PVC from the snapshot. Once all checks are finished, it deletes all test resources.
In OpenStack, if a volume was created from a volume snapshot, this snapshot cannot be deleted before its child volume is deleted. So the first attempt to delete the VS fails because it takes a bit of time to delete the restored volume. Of course, the source volume deletion also fails, because the source volume can only be deleted once the VS and the restored volume are deleted.
But the OpenStack snapshot API is async, which means it responds 202 (return value 0) to a DELETE snapshot request every time, even if the deletion failed. I.e. the cs.Cloud.DeleteSnapshot func will never fail in our scenario, the k8s VS and VSC (VolumeSnapshotContent) objects are deleted without any issue, and the deletion will not be requeued even though the OpenStack resources are still there. https://github.com/kubernetes/cloud-provider-openstack/blob/9ed6d961c6ee5a4f51533877ae981aa6d9753f2d/pkg/csi/cinder/controllerserver.go#L415 This results in garbage resources (volumes of the source PVC, and volume snapshots of the VS) in the OpenStack cloud.
What you expected to happen: I think this could more or less be considered an OpenStack API issue, but I think we can make a workaround at the plugin level. The CSI plugin would check the volume snapshot status after requesting the deletion. If the volume snapshot is removed, it returns OK. If the volume snapshot status changes to ERROR or it remains Available (we can set a timeout for this check), it returns an error so the deletion can be requeued.
How to reproduce it: It can be reproduced by using these example resources https://github.com/kubernetes/cloud-provider-openstack/tree/master/examples/cinder-csi-plugin/snapshot
Anything else we need to know?:
Environment:
openstack-cloud-controller-manager (or other related binary) version: we only use cinder-csi-plugin. The versions are:
cinder-csi-plugin: registry.k8s.io/provider-os/cinder-csi-plugin:v1.25.6
csi-snapshotter: k8s.gcr.io/sig-storage/csi-snapshotter:v6.2.1
csi-provisioner: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.1
csi-attacher: k8s.gcr.io/sig-storage/csi-attacher:v3.5.1
OpenStack version: Ocata, Wallaby (I think all versions, even Antelope, will have the same issue)
Others: K8s: 1.25.9