kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Allow user to opt-in to Node draining during Cluster delete #9692

Open dlipovetsky opened 11 months ago

dlipovetsky commented 11 months ago

What would you like to be added (User Story)?

As an operator, I would like to opt in to Node draining during Cluster delete, so that I can delete Clusters managed by Infrastructure Providers that impose additional conditions on deleting underlying VMs, e.g. detaching all secondary storage (which in turn can require all Pods using this storage to be evicted).

Detailed Description

When it reconciles a Machine resource marked to be deleted, the Machine controller attempts to drain the corresponding Node. However, if the Cluster resource is marked for deletion, it does not perform the drain. This behavior was added in https://github.com/kubernetes-sigs/cluster-api/pull/2746 in order to decrease the time it takes to delete a cluster.

As part of reconciling a Machine marked for deletion, the Machine controller marks the referenced InfraMachine resource for deletion. The infrastructure provider's InfraMachine controller reconciles that delete. The InfraMachine controller can refuse to delete the underlying infrastructure until some condition is met. That condition could be that the corresponding Node must be drained.

For example, if I create a cluster with the VMware Cloud Director infrastructure provider (CAPVCD), and use secondary storage ("VCD Named Disk Volumes"), then the delete cannot proceed until I drain all Nodes with Pods that use this storage. Cluster API does not drain the Nodes.

CAPVCD requires that the Node corresponding to the InfraMachine be drained. This is because CAPVCD requires all secondary storage to be detached from the VM underlying the InfraMachine. Detaching this storage is the responsibility of the VCD CSI driver, which refuses to do so until all volumes that use this storage can be unmounted, which in turn means all Pods using these volumes must be evicted from the Node.

Because Cluster API does not drain Nodes during Cluster delete, the Nodes are not drained, the volumes are not unmounted, and CAPVCD refuses to delete the underlying VMs. The Cluster delete process cannot complete until the Nodes are drained manually.

Anything else you would like to add?

As @lubronzhan helpfully points out below, draining on Cluster delete is possible by implementing and deploying a PreDrainDeleteHook.

It is my understanding that the Machine controller skips drain on Cluster delete as an optimization, not for correctness. For some infrastructure providers, draining is required for correctness. Given that, I think Cluster API itself should allow users to disable this optimization in favor of correctness, and I think requiring users to implement and deploy a webhook is too high a bar. I would prefer a simpler solution, e.g., an annotation, or an API field.

Label(s) to be applied

/kind feature

One or more /area label.
See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

k8s-ci-robot commented 11 months ago

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

dlipovetsky commented 11 months ago

/cc @erkanerol

dlipovetsky commented 11 months ago

On a related note, there is an ongoing conversation in the Kubernetes Slack with CAPVCD maintainers about whether the InfraMachine controller should impose the condition described above.

lubronzhan commented 11 months ago

I think it's already doable if you implement the MachineDeletionPhaseHook https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20200602-machine-deletion-phase-hooks.md#changes-to-machine-controller
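
For reference, the hook is registered with an annotation on the Machine. Roughly like this (the hook name and owner value are just placeholders, and the rest of the Machine spec is omitted):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: workload-md-0-abcde
  annotations:
    # The Machine controller pauses deletion before draining the Node until
    # the controller that owns this hook removes the annotation.
    pre-drain.delete.hook.machine.cluster.x-k8s.io/wait-for-storage-detach: example-storage-controller
spec:
  clusterName: workload
  # ... rest of the spec omitted
```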

dlipovetsky commented 10 months ago

> I think it's already doable if you implement the MachineDeletionPhaseHook https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20200602-machine-deletion-phase-hooks.md#changes-to-machine-controller

Thanks for this helpful context! I amended my feature request to explain why I think Cluster API should provide a solution that doesn't require the user to implement and deploy a webhook.

vincepri commented 10 months ago

This seems like a reasonable ask. I'd like to see an option on the Cluster object, under a field we could call deletionStrategy or similar, to avoid adding more annotations.
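
Roughly sketching what I have in mind (this field does not exist today; the name and values are entirely up for discussion):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload
spec:
  # Hypothetical field, not part of the current API:
  # "Graceful" would drain Nodes during Cluster delete,
  # "Forced" would keep today's skip-drain behavior.
  deletionStrategy: Graceful
```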

sbueringer commented 10 months ago

Sounds reasonable. Shouldn't be necessary to implement a hook for this case.

dlipovetsky commented 10 months ago

I think there are two obvious deletion strategies. Feel free to suggest others, or alternate names for these.

  1. Graceful. Drain a Node before deleting its corresponding Machine.
  2. Forced. Just delete the Machine.

Today, we support the Forced strategy only.

I think we could support the Graceful strategy, but this support is limited by how drain works. Notably, drain does not evict Pods managed by DaemonSets.

Let's say a CAPVCD cluster runs a DaemonSet that mounts VCD CSI-backed persistent volumes. If we use the Graceful strategy to delete the cluster, CAPI will drain each Node. However, after a Node is drained, the DaemonSet Pods continue to run, and their volumes remain mounted.
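
To make the scenario concrete, here is the kind of DaemonSet I have in mind (names and the storage class are made up); each Pod gets its own CSI-backed volume, and because drain does not evict DaemonSet Pods, those volumes stay mounted after the Node is drained:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-cache
spec:
  selector:
    matchLabels:
      app: node-cache
  template:
    metadata:
      labels:
        app: node-cache
    spec:
      containers:
      - name: cache
        image: registry.example.com/node-cache:latest
        volumeMounts:
        - name: data
          mountPath: /var/cache
      volumes:
      - name: data
        # Generic ephemeral volume: every DaemonSet Pod gets its own PVC,
        # backed here by a (hypothetical) VCD CSI storage class.
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: ["ReadWriteOnce"]
              storageClassName: vcd-named-disk
              resources:
                requests:
                  storage: 10Gi
```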

dlipovetsky commented 9 months ago

In summary, I think supporting a Graceful strategy may make sense, but it won't address the problem that motivated this idea. I'm fine with waiting to see what the community thinks.

/priority awaiting-more-evidence

fabriziopandini commented 6 months ago

/triage accepted
/kind api-change
/priority backlog

IMO there are a couple of things to unwind (and eventually split into subtasks), but the discussion is interesting:

fabriziopandini commented 5 months ago

triage-party:
/remove-priority awaiting-more-evidence
because we are not waiting for logs/data, but the issue needs refinement on the requirements/way forward.

/remove-triage accepted
because the issue is not actionable in this state

k8s-ci-robot commented 5 months ago

@fabriziopandini: Those labels are not set on the issue: priority/

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/9692#issuecomment-2058572311):

> triage-party:
> /remove-priority awaiting-more-evidence
> because we are not waiting for logs/data, but the issue needs refinement on the requirements/way forward.
>
> /remove-triage accepted
> because the issue is not actionable in this state

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

sbueringer commented 1 month ago

/help

k8s-ci-robot commented 1 month ago

@sbueringer: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/9692):

> /help

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

lubronzhan commented 1 month ago

hands up

sbueringer commented 1 month ago

Thank you!! Feel free to assign yourself the issue

enxebre commented 1 month ago

> I think we could support the Graceful strategy, but this support is limited by how drain works. Notably, drain does not evict Pods managed by DaemonSets.
>
> Let's say a CAPVCD cluster runs a DaemonSet that mounts VCD CSI-backed persistent volumes. If we use the Graceful strategy to delete the cluster, CAPI will drain each Node. However, after a Node is drained, the DaemonSet Pods continue to run, and their volumes remain mounted.

Also, I understand Graceful makes the assumption that there are no PDBs in the data plane? Otherwise deletion is perpetually blocked.
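
For example, a PDB like this in the workload cluster (illustrative) denies eviction as soon as only one matching Pod is left, so a full-cluster drain can never finish:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  # Eviction of the last remaining Pod is denied, blocking drain indefinitely
  # once there is no other Node left to reschedule it to.
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```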

sbueringer commented 1 month ago

Just to mention it.

Once we have fixed propagation of timeouts across the board, even while objects are being deleted (#10753 plus e.g. Cluster topology and KCP), it might be possible to just set NodeDrainTimeout + NodeVolumeDetachTimeout to a very low value when deleting the Cluster, and we would end up with something very similar to "skip drain".
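
That is, roughly something like this on a MachineDeployment (field values are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: workload-md-0
spec:
  template:
    spec:
      # With very low values, drain and volume detach give up almost immediately,
      # which is close to the current "skip drain on Cluster delete" behavior.
      nodeDrainTimeout: 1s
      nodeVolumeDetachTimeout: 1s
```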