kubevirt / web-ui

OpenShift Cluster Console UI
https://www.openshift.org
Apache License 2.0

[OKD-Core] Add Node maintenance action #328

Closed · rawagner closed this 5 years ago

rawagner commented 5 years ago

ref https://github.com/kubevirt/web-ui-components/pull/404

rawagner commented 5 years ago

[Screenshot: maintenance]

I didn't include info about automatically rebuilding a host's data after 30 minutes, as I'm not sure that's true for nodes. I also added a Maintenance reason text field, since we can specify it in the NodeMaintenance CR. But we don't surface that information anywhere in the UI. @lizsurette any ideas? Should it be shown after clicking on the Under maintenance status?
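
For context, here is a minimal sketch of the NodeMaintenance CR such a modal could create, with the reason field populated from the new text input. The API group/version and object name are assumptions based on the kubevirt node-maintenance-operator and may differ by release:

```sh
# Sketch only: create a NodeMaintenance CR carrying the maintenance reason.
# apiVersion is an assumption and may vary by node-maintenance-operator release.
cat <<EOF | kubectl apply -f -
apiVersion: nodemaintenance.kubevirt.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-node01         # hypothetical object name
spec:
  nodeName: node01                 # node to place under maintenance
  reason: "Replacing failed disk"  # text from the Maintenance reason field
EOF
```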

andybraren commented 5 years ago

Please correct me if I'm wrong, but my understanding is that the "Start Maintenance" action will only be available to hosts in Metal³, and not nodes?

Without the data rebuilding logic, this modal looks like what I would imagine a "Drain Node" modal to be, one that triggers something like `kubectl drain $nodename`. That's probably fine for OKD, but IIRC draining a node without the data rebuilding logic isn't recommended in Metal³, since immediately moving storage pods could be a very long and expensive operation.
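
For reference, a plain drain with no rebuild logic would look roughly like this; the node name is hypothetical and the flags are the common ones rather than anything this PR prescribes:

```sh
# Cordon node01 and evict its pods; nothing rebuilds or migrates storage.
# --ignore-daemonsets skips DaemonSet-managed pods (they cannot be evicted);
# --delete-local-data permits evicting pods that use emptyDir volumes
# (renamed --delete-emptydir-data in newer kubectl releases).
kubectl drain node01 --ignore-daemonsets --delete-local-data
```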

Regardless of that confusion, here's a mockup of how we could show a Maintenance reason for hosts in the UI. I agree that showing it when clicking the Under maintenance status is the best option. If no reason is given, nothing is shown in that area of the popover.

[Mockup: 2b-1a-maint-modal]

[Mockup: 2a-2b-maint-status-messages]

lizsurette commented 5 years ago

Thanks for the quick, thorough response, @andybraren! I especially like the addition of where that maintenance reason would be surfaced, since I had a similar question to the one @rawagner brought up there: why ask if we don't show it?!

> Please correct me if I'm wrong, but my understanding is that the "Start Maintenance" action will only be available to hosts in Metal³, and not nodes?

Since putting a Host into Maintenance Mode technically puts a Node into "Unschedulable", there have been discussions about going ahead and allowing a user to perform this Maintenance action from the Nodes table too. The motivation is that if a user looks to the Nodes table to do this and can't find it, they could get lost. All of this still hints at needing to fix the Host/Machine/Node relationship in the future, so hopefully this is more of a temporary solution :)

jelkosz commented 5 years ago

> Since putting a Host into Maintenance Mode technically puts a Node into "Unschedulable", there have been discussions about going ahead and allowing a user to perform this Maintenance action from the Nodes table too. The motivation is that if a user looks to the Nodes table to do this and can't find it, they could get lost. All of this still hints at needing to fix the Host/Machine/Node relationship in the future, so hopefully this is more of a temporary solution :)

Actually, the temporary solution is the one on the Machines screen, since it really only puts the node into Unschedulable; KubeVirt (depending on how it is configured) may then migrate the VMs away, and that's it.

This here is the correct longer-term solution: it calls the node maintenance operator, which correctly evicts everything. I'm not sure about the Ceph data rebuild though. @MarSik, do you know if that happens here?
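
To make the contrast concrete, a hedged sketch of the two paths (node name hypothetical, and assuming the node-maintenance-operator CRD is installed):

```sh
# Machines-screen behaviour: only mark the node unschedulable; running
# workloads stay put unless KubeVirt chooses to live-migrate the VMs.
kubectl cordon node01

# Node-maintenance-operator behaviour: the operator reconciles the
# NodeMaintenance CR, cordons the node, and evicts the workloads.
kubectl get nodemaintenances
```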

mareklibra commented 5 years ago

Let's continue with a follow-up. The basic functionality is there, and it will be nice to have it in the next release.