metal3-io / baremetal-operator

Bare metal host provisioning integration for Kubernetes

Feature Request "Forced BMH deletion" #1666

Open · matthewei opened this issue 5 months ago

matthewei commented 5 months ago

What steps did you take and what happened:

matthew@ubuntu:~/github/metal3_dev/baremetal-operator|main⚡ ⇒  kubectl get bmh -A              
NAMESPACE   NAME                   STATE                        CONSUMER   ONLINE   ERROR   AGE
default     cmss-baremetalhost-1   powering off before delete              true             2m58s
matthew@ubuntu:~/github/metal3_dev/baremetal-operator|main⚡ ⇒  
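
The controller's view of the stuck state can be inspected with the commands below; the operator namespace and deployment name here are the defaults from the BMO deploy manifests and may differ in other setups:

kubectl describe bmh cmss-baremetalhost-1 -n default
kubectl logs -n baremetal-operator-system deployment/baremetal-operator-controller-manager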

What did you expect to happen:

I expected the BMH to be deleted directly, instead of hanging in the "powering off before delete" state.

Anything else you would like to add:

Environment:

/kind bug

agewagra commented 5 months ago

Try using kubectl patch to remove the finalizers; the BMH will then be deleted.
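
For reference, the workaround described above amounts to something like the following (note that the next comment strongly advises against it, because it bypasses BMO's cleanup):

kubectl patch bmh cmss-baremetalhost-1 -n default --type=merge -p '{"metadata":{"finalizers":null}}'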

dtantsur commented 5 months ago

> Try using kubectl patch to remove the finalizers.

Please, PLEASE, never recommend deleting finalizers to anyone, especially if they don't know 100% what they are doing. In this case, you can and probably will leave a dangling node behind in Ironic.
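
For anyone who has already removed the finalizers this way: the leftover can usually be found and cleaned up with the Ironic client, assuming you have the standalone baremetal CLI and access to the Ironic endpoint that BMO uses (how to reach it depends on the deployment, and maintenance mode may be required depending on the node's provision state):

baremetal node list
baremetal node maintenance set <node-uuid>
baremetal node delete <node-uuid>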

dtantsur commented 5 months ago

On the topic of the request: I think we are missing a way to force-delete BareMetalHosts. The reason BMO works the way it does is to try to leave the host in a safe state; e.g. a powered-on worker can try to rejoin the cluster and mess up CAPI.
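
To make the request concrete, a forced deletion could look something like an opt-in annotation that tells BMO to skip the power-off and deprovisioning steps and only remove its own records; the annotation name below is purely hypothetical and not part of the current API:

# hypothetical sketch only; baremetalhost.metal3.io/force-delete does not exist today
kubectl annotate bmh cmss-baremetalhost-1 -n default baremetalhost.metal3.io/force-delete=true
kubectl delete bmh cmss-baremetalhost-1 -n default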

Rozzii commented 5 months ago

I have renamed this issue to represent an actionable feature request. I agree with @dtantsur that we don't have forced BMH deletion, and it could come in handy for many users.
/triage accepted
/kind feature

matthewei commented 5 months ago

> I have renamed this issue to represent an actionable feature request. I agree with @dtantsur that we don't have forced BMH deletion, and it could come in handy for many users.
> /triage accepted
> /kind feature

yes, we need this feature

metal3-io-bot commented 2 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues will close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Rozzii commented 2 months ago

/remove-lifecycle stale.

Rozzii commented 2 months ago

/remove-lifecycle stale