openshift / installer

Install an OpenShift 4.x cluster
https://try.openshift.com
Apache License 2.0

baremetal: Implement "destroy cluster" support #2005

Open · russellb opened this issue 5 years ago

russellb commented 5 years ago

Original issue: https://github.com/openshift-metal3/kni-installer/issues/74

kni-install does not yet support "destroy cluster" for baremetal clusters.

See pkg/destroy/baremetal/baremetal.go for the stub, and other implementations under pkg/destroy/ for examples of implementations on other platforms.
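For orientation, here is a minimal sketch of the shape such an implementation has to fill in. This is not the installer's actual code: the Destroyer interface is paraphrased from pkg/destroy, and the struct fields are placeholders for whatever state a real implementation would need.

```go
// Hypothetical sketch only: the interface is paraphrased from
// pkg/destroy, and ClusterUninstaller's fields are placeholders.
package baremetal

import (
	"errors"

	"github.com/sirupsen/logrus"
)

// Destroyer mirrors what the other platform implementations under
// pkg/destroy/ satisfy: a single Run that tears the cluster down.
type Destroyer interface {
	Run() error
}

// ClusterUninstaller carries the state a real implementation would
// need (Ironic endpoint, BMC credentials, the list of hosts).
type ClusterUninstaller struct {
	Logger logrus.FieldLogger
}

// Run is where deprovisioning would be driven; today it is a stub.
func (u *ClusterUninstaller) Run() error {
	return errors.New("baremetal: destroy cluster is not yet implemented")
}
```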

Having the baremetal-operator drive Ironic to destroy the cluster from the inside is not ideal, as we can't ensure that the cluster is actually fully destroyed. In particular, we can't drive all of the nodes through cleaning.

One way to do this would be to reverse how Ironic moves into the cluster during deployment: copy all of the host information out of the cluster, shut down the baremetal-operator, and re-launch Ironic on the provisioning host. The installer could then drive the local Ironic to ensure all hosts are deprovisioned.
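As a rough illustration of that last step, here is a hypothetical sketch using gophercloud's baremetal bindings to ask the locally relaunched Ironic to deprovision every registered node; the noauth endpoint URL and the microversion are assumptions.

```go
// Hypothetical sketch: ask a standalone Ironic to deprovision every
// node (the "deleted" provision target), which drives each host back
// through cleaning out-of-band.
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud/openstack/baremetal/noauth"
	"github.com/gophercloud/gophercloud/openstack/baremetal/v1/nodes"
	"github.com/gophercloud/gophercloud/pagination"
)

func main() {
	// Assumed: the relaunched Ironic listens locally without auth.
	client, err := noauth.NewBareMetalNoAuth(noauth.EndpointOpts{
		IronicEndpoint: "http://localhost:6385/v1",
	})
	if err != nil {
		panic(err)
	}
	client.Microversion = "1.50" // assumed new enough for the provision-state API

	err = nodes.List(client, nodes.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
		list, err := nodes.ExtractNodes(page)
		if err != nil {
			return false, err
		}
		for _, n := range list {
			fmt.Printf("deprovisioning %s\n", n.UUID)
			res := nodes.ChangeProvisionState(client, n.UUID, nodes.ProvisionStateOpts{
				Target: nodes.TargetDeleted,
			})
			if res.Err != nil {
				return false, res.Err
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}
```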


@hardys: May 9-10

This is an interesting one. I'd assumed we'd run Ironic on the bootstrap VM in the deploy case (where there's no external Ironic, e.g. on the provisioning host), but since there's no bootstrap VM on destroy, that approach won't work. So I wonder if we should just run the Ironic pod on the host via kni-installer in both cases?

This is actually quite tricky to implement in the same way as the other platforms, because they all rely on tagging resources, then discovering all the tagged resources and deleting them. That won't work here unless we have a single long-lived Ironic to maintain the state/tags.

I think we'll have to either scale down the worker machineset, kill the BMO (and hosted Ironic), then spin up another Ironic to delete the masters (using details gathered from the externally provisioned BareMetalHost objects), or just grab all the BareMetalHost details, kill the BMO/Ironic, then use another/local Ironic to tear them all down.
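For the "grab all the BareMetalHost details" step, here is a hypothetical sketch using a controller-runtime client; the metal3 API import path and the openshift-machine-api namespace are assumptions based on the baremetal-operator of the time.

```go
// Hypothetical sketch: list every BareMetalHost and keep the BMC
// address plus credentials reference, which is what a replacement
// Ironic needs to re-register and tear down each host after the BMO
// is gone.
package main

import (
	"context"
	"fmt"

	metal3v1alpha1 "github.com/metal3-io/baremetal-operator/pkg/apis/metal3/v1alpha1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	scheme := runtime.NewScheme()
	if err := metal3v1alpha1.AddToScheme(scheme); err != nil {
		panic(err)
	}

	cfg, err := config.GetConfig() // honors KUBECONFIG / in-cluster config
	if err != nil {
		panic(err)
	}
	c, err := client.New(cfg, client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	hosts := &metal3v1alpha1.BareMetalHostList{}
	if err := c.List(context.TODO(), hosts, client.InNamespace("openshift-machine-api")); err != nil {
		panic(err)
	}
	for _, h := range hosts.Items {
		fmt.Printf("%s: bmc=%s creds=%s\n", h.Name, h.Spec.BMC.Address, h.Spec.BMC.CredentialsName)
	}
}
```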


@russellb: May 10

> I think we'll have to either scale down the worker machineset, kill the BMO (and hosted Ironic), then spin up another Ironic to delete the masters (using details gathered from the externally provisioned BareMetalHost objects), or just grab all the BareMetalHost details, kill the BMO/Ironic, then use another/local Ironic to tear them all down.

I agree with this.


@hardys: May 10

OK, so I think we should solve this by first fixing issue 68 (run Ironic on the bootstrap VM), so we can optionally launch Ironic on the bootstrap VM via an injected manifest provided by Ignition. On destroy, we'd then launch a similar VM with the same configuration (but without the bootstrap configuration).

This should mean some reuse, since we'd use the exact same pattern/config for deploying the masters and for deprovisioning on destroy. It also avoids the potential complexity of running the Ironic container on the host directly, where we may want to support multiple OS options and may not want to require host access (e.g. to modify firewall rules).

If that sounds reasonable, I'll take a look at enabling the bootstrap VM to run Ironic, ideally using the same (or similar) configuration that we enable for worker deployment in metal3-io/baremetal-operator#72.
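To make the idea concrete, here is a hypothetical sketch of building such an injected manifest with the Ignition v2.2 types the installer already uses; the systemd unit contents and the container image are purely illustrative.

```go
// Hypothetical sketch: an Ignition config carrying one systemd unit
// that runs the Ironic container via podman on the destroy-time VM.
package main

import (
	"encoding/json"
	"fmt"

	igntypes "github.com/coreos/ignition/config/v2_2/types"
)

func main() {
	unit := `[Unit]
Description=Ironic for baremetal deprovisioning
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run --net host --privileged quay.io/metal3-io/ironic
Restart=on-failure

[Install]
WantedBy=multi-user.target
`
	enabled := true
	cfg := igntypes.Config{
		Ignition: igntypes.Ignition{Version: igntypes.MaxVersion.String()},
		Systemd: igntypes.Systemd{
			Units: []igntypes.Unit{{
				Name:     "ironic.service",
				Enabled:  &enabled,
				Contents: unit,
			}},
		},
	}
	data, err := json.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // feed this to the destroy-time VM as its Ignition config
}
```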


@dhellmann: May 10

How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host?
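A minimal sketch of what that could look like, building the DaemonSet object with client-go types; the utility image, target disk, and namespace are assumptions for illustration.

```go
// Hypothetical sketch: a privileged DaemonSet that zaps the partition
// table of an assumed root disk on every node, then reboots the host.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged := true
	labels := map[string]string{"app": "wipe-disks"}
	ds := &appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "wipe-disks", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "wipe",
						Image: "registry.example.com/tools:latest", // assumed image with sgdisk
						// Zap GPT/MBR metadata, then force an immediate
						// reboot via sysrq (assumes sysrq is enabled).
						Command: []string{"/bin/sh", "-c",
							"sgdisk --zap-all /dev/sda && echo b > /proc/sysrq-trigger"},
						SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
					}},
				},
			},
		},
	}
	out, err := json.Marshal(ds)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // apply with a client or kubectl
}
```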


@russellb: May 13

> How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host?

That would be simpler for sure, but the downside is the lack of any out-of-band components to verify that the cluster really has been destroyed and the process is complete.


@hardys: May 13

> How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host?

This may be something to discuss with product management downstream, I guess, but FWIW we've already seen issues redeploying Ceph on boxes where the disks weren't cleaned of metadata from previous deployments, and I was assuming there would be security/compliance reasons to prefer cleaning all of the cluster data from the disks.

I also assumed we'd want all the nodes (including the masters) powered down after the destroy operation, which is probably most easily achieved using Ironic; at that point, cleaning on deprovision becomes easy to enable as well.
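For that final power-down, here is a hypothetical helper in the same gophercloud style as the deprovisioning sketch above, asking Ironic to power every node off through the BMC.

```go
// Hypothetical sketch: once hosts are deprovisioned and cleaned,
// power each one off out-of-band, which is the verification an
// in-cluster DaemonSet can't give us.
package baremetal

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/baremetal/v1/nodes"
	"github.com/gophercloud/gophercloud/pagination"
)

// powerOffAll asks Ironic to power off every registered node via the BMC.
func powerOffAll(client *gophercloud.ServiceClient) error {
	return nodes.List(client, nodes.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
		list, err := nodes.ExtractNodes(page)
		if err != nil {
			return false, err
		}
		for _, n := range list {
			res := nodes.ChangePowerState(client, n.UUID, nodes.PowerStateOpts{
				Target: nodes.PowerOff,
			})
			if res.Err != nil {
				return false, res.Err
			}
		}
		return true, nil
	})
}
```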

openshift-bot commented 4 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 4 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

dhellmann commented 4 years ago

/remove-lifecycle rotten

stbenjam commented 4 years ago

/lifecycle frozen