kubernetes-retired / service-catalog

Consume services in Kubernetes using the Open Service Broker API
https://svc-cat.io
Apache License 2.0

Service Instances cascading delete proposal #2734

Closed: mszostok closed this issue 2 years ago

mszostok commented 5 years ago

Service Instances cascading delete proposal

This is the umbrella issue for the Service Instances cascading delete operation. In this issue, we also track the implementation of this functionality from alpha to GA stage.

Motivation

Currently, during the deprovisioning operation, the default behavior is to fail the deprovisioning if there are bindings against the instance being deprovisioned.

Problems: a user who expects that deleting a ServiceInstance cleans up everything related to it must instead delete every ServiceBinding that references the instance by hand before the deprovision call can succeed.

source: https://github.com/openservicebrokerapi/servicebroker/blob/v2.15/spec.md#deprovisioning
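To make the pain point concrete, a minimal sketch of the current flow; the resource names my-instance and my-binding are hypothetical:

```sh
# Deleting an instance that still has bindings: the deprovision fails
# and the instance lingers with an error condition.
kubectl delete serviceinstance my-instance

# The user has to clean up every referencing binding by hand first.
kubectl delete servicebinding my-binding

# Only now does the deprovision call to the broker succeed.
kubectl delete serviceinstance my-instance
```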

Discussions

  1. Use the kubectl --cascade flag

    --cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.

    Under the hood, a propagationPolicy is applied to the given resource: --cascade=true maps to propagationPolicy=Foreground and --cascade=false maps to propagationPolicy=Background. The propagationPolicy is handled by the Kubernetes garbage collector (see the first sketch after this list). However, this option cannot be used because of the expected behavior:

    The ownerReference mechanism cannot be used because of a problem when a user sets --cascade=false: in that case, Kubernetes uses propagationPolicy=Background.

    In background cascading deletion, Kubernetes deletes the owner object immediately and the garbage collector then deletes the dependents in the background.

    source: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#background-cascading-deletion

    In our case, the bindings must be deleted before the instance, so the background behavior is not an option for Service Catalog.

  2. Global flag for changing the behavior. Add a disable-cascading-deletion=<true/false> flag to the Service Catalog controller-manager. This flag globally changes the behavior of the controller-manager.

  3. ServiceInstance spec property changes. We can add a deleteBehavior field under the ServiceInstance spec with two possible values: forceCascade or failOnBindings. With forceCascade, deleting an instance force-deletes all bindings that reference it, without asking the person who deletes it for confirmation. With failOnBindings, the deletion simply fails if there are bindings that reference the instance. The field would be optional and default to failOnBindings (see the second sketch after this list).

  4. Always perform cascading deletion. When a user deletes a ServiceInstance, the related ServiceBindings are deleted first (via unbind calls to the broker). When all ServiceBindings are deleted successfully, the ServiceInstance itself is deleted (via a deprovision call to the broker).
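
To illustrate the propagationPolicy mechanics behind option 1, a minimal sketch against the raw Kubernetes API, following the pattern from the garbage-collection docs; the ReplicationController name my-rc is hypothetical:

```sh
# Foreground cascading deletion: dependents are deleted first, then the
# owner. The policy travels in the DeleteOptions body and is enforced
# by the Kubernetes garbage collector.
kubectl proxy --port=8001 &
curl -X DELETE \
  'http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "propagationPolicy": "Foreground"}'

# Background cascading deletion: the owner is deleted immediately and
# the garbage collector removes dependents afterwards -- the ordering
# that rules this option out for Service Catalog.
curl -X DELETE \
  'http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "propagationPolicy": "Background"}'
```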
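For option 3, a sketch of what the proposed field might have looked like on a manifest. Note that deleteBehavior was only proposed in this issue and never added to the real ServiceInstance schema; the class and plan names are placeholders:

```sh
# Hypothetical manifest: deleteBehavior is the *proposed* field from
# option 3, never implemented. forceCascade would force-delete all
# referencing bindings; failOnBindings (the proposed default) keeps
# today's fail-fast behavior.
kubectl apply -f - <<EOF
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-instance
spec:
  clusterServiceClassExternalName: example-class
  clusterServicePlanExternalName: example-plan
  deleteBehavior: failOnBindings   # proposed values: failOnBindings | forceCascade
EOF
```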

Accepted solution

Based on client feedback and the OSB API specification, we know that we want to implement this behavior, and to do it safely we decided to go with the 2nd option: a global flag for changing the behavior. Reasons:

  1. OSB API spec:

    Platforms MUST delete all Service Bindings for a Service Instance prior to attempting to deprovision the Service Instance. This specification does not specify what a Service Broker is to do if it receives a deprovision request while there are still Service Bindings associated with it.

    source: https://github.com/openservicebrokerapi/servicebroker/blob/v2.15/spec.md#deprovisioning

  2. If users expect that deleting a Kubernetes API resource results in the total deletion of all resources associated with it, they can enable cascading deletion and the controller-manager will take care of the rest; users who do not want this behavior can rely on the disable-cascading-deletion flag.

This solution was accepted and the alpha implementation has already been added in this PR: https://github.com/kubernetes-sigs/service-catalog/pull/2711. Starting from Service Catalog version 0.3.0, you can enable this option by setting the CascadingDeletion feature gate to true. For users who do not accept cascading deletion, the controller-manager provides a disable-cascading-deletion flag that blocks the feature even if cascading deletion becomes enabled by default (in the future).
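
A minimal sketch of the two toggles. The exact binary invocation is an assumption (deployments typically set these through the Helm chart), but the flag forms follow the standard Kubernetes feature-gate convention and the flag name given above:

```sh
# Opt in to the alpha behavior (Service Catalog >= 0.3.0): a standard
# Kubernetes-style feature-gate toggle.
service-catalog controller-manager \
  --feature-gates=CascadingDeletion=true

# Safeguard from the accepted proposal: block cascading deletion
# globally, even if the feature ever becomes enabled by default.
service-catalog controller-manager \
  --disable-cascading-deletion=true
```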

ACTION REQUIRED

This change is backward compatible and was implemented as alpha. We are waiting for your feedback and use cases: if this approach happens to break your workflow, that is unacceptable to us, so please let us know.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

mszostok commented 4 years ago

/remove-lifecycle rotten

mszostok commented 4 years ago

ICYMI:

The cascading binding deletion feature is officially available in the 0.3.0 release. The controller-manager deletes all ServiceBindings for a ServiceInstance before attempting to deprovision the ServiceInstance. This option can be enabled by setting the CascadingDeletion feature gate to true (#2711).
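
A short end-to-end sketch of the released behavior, assuming the CascadingDeletion feature gate is on; the resource names are hypothetical:

```sh
# With CascadingDeletion enabled, a single delete is enough.
kubectl delete serviceinstance my-instance

# The controller-manager first sends an unbind call to the broker for
# every ServiceBinding that references my-instance, and only then sends
# the deprovision call for the instance itself.
kubectl get servicebindings   # the referencing bindings disappear too
```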

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/service-catalog/issues/2734#issuecomment-714717460):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

jhvhs commented 4 years ago

/remove-lifecycle rotten
/reopen
/lifecycle frozen

k8s-ci-robot commented 4 years ago

@jhvhs: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/service-catalog/issues/2734#issuecomment-716895665):

> /remove-lifecycle rotten
> /reopen
> /lifecycle frozen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
mrbobbytables commented 2 years ago

This project is being archived, closing open issues and PRs. Please see this PR for more information: https://github.com/kubernetes/community/pull/6632