AObuchow closed this issue 2 months ago.
After further consideration, removing the FailedScheduling event from the hard-coded list of unrecoverable workspace pod events might not be the best approach.
If the FailedScheduling event is ignored, it is still reported when the workspace times out:
NAME DEVWORKSPACE ID PHASE INFO
theia-next-high-cpu workspace656dfe6d86764967 Failed devworkspace failed to progress past phase 'Starting' for longer than timeout (1m). Reason: Detected unrecoverable event FailedScheduling: 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
However, if we remove the FailedScheduling event from the hard-coded list of unrecoverable pod events, then it will not be reported when the workspace times out.
Our goal is to have the FailedScheduling event not cause workspace failures by default. This would make it easier to use cluster autoscaling in Che and prevent workspaces from failing immediately when there are transient cluster issues.
However, we would still like it to be possible to catch the FailedScheduling event (hence https://github.com/devfile/devworkspace-operator/issues/1279), and to let users know if their workspace timed out due to the FailedScheduling event.
Thus, a potential alternative approach is to include the FailedScheduling event in the DWOC's ignoredUnrecoverableEvents by default. This would probably be accomplished through kubebuilder annotations as well as the internal default DWOC; see the sketch below.
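For illustration only, here is a minimal sketch of how such a default could be declared with a kubebuilder marker, assuming a trimmed-down config struct (the ignoredUnrecoverableEvents field name comes from the DWOC; the struct itself is hypothetical and is not the actual devworkspace-operator source):

```go
// Hypothetical, trimmed-down workspace config type used only to illustrate
// defaulting via a kubebuilder marker; not the actual DWOC type.
type WorkspaceConfig struct {
	// Pod events that would normally be treated as unrecoverable but should
	// be ignored when deciding whether to fail a starting workspace.
	// +kubebuilder:default={"FailedScheduling"}
	// +optional
	IgnoredUnrecoverableEvents []string `json:"ignoredUnrecoverableEvents,omitempty"`
}
```

A marker like this only covers CRD-level defaulting when the field is absent from the submitted object; the internal default DWOC would presumably still need to carry the same value.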
However, this alternative approach might not work, as we need to ensure users can remove the FailedScheduling event from the default list of ignoredUnrecoverableEvents if they want to. It might be difficult or impossible to differentiate between the ignoredUnrecoverableEvents list being emptied by the user (no unrecoverable events should be ignored) and the ignoredUnrecoverableEvents list not being configured at all (the default unrecoverable events, i.e. the FailedScheduling event, should be ignored); see the sketch below.
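As a rough, standalone illustration of why those two cases are hard to tell apart (the workspaceConfig type below is hypothetical, not the real DWOC type): immediately after decoding, an unset list and an explicitly emptied list differ only as a nil slice versus an empty slice, and that distinction disappears as soon as the object is re-marshaled with omitempty or passed through merge logic that only checks length.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical, trimmed-down config struct used only to illustrate the
// nil-vs-empty problem; not the actual DWOC type.
type workspaceConfig struct {
	IgnoredUnrecoverableEvents []string `json:"ignoredUnrecoverableEvents,omitempty"`
}

func main() {
	var unset, cleared workspaceConfig
	// Case 1: the user never set the field (the defaults should apply).
	_ = json.Unmarshal([]byte(`{}`), &unset)
	// Case 2: the user explicitly emptied the field (nothing should be ignored).
	_ = json.Unmarshal([]byte(`{"ignoredUnrecoverableEvents": []}`), &cleared)

	// Right after decoding, the two cases still differ: nil slice vs. empty slice.
	fmt.Println(unset.IgnoredUnrecoverableEvents == nil)   // true
	fmt.Println(cleared.IgnoredUnrecoverableEvents == nil) // false

	// But once the object is re-marshaled, omitempty drops both fields and the
	// distinction is gone; length-based merge logic never saw it at all.
	a, _ := json.Marshal(unset)
	b, _ := json.Marshal(cleared)
	fmt.Println(string(a), string(b)) // {} {}
}
```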
There are many cases where causing the FailedScheduling event to result in workspace failure is problematic. For example, flaky cluster infrastructure can require multiple attempts to schedule a workspace pod on a cluster. Additionally, the cluster autoscaler can only kick in while a pod remains in the unschedulable state; if we delete the deployment immediately after a pod is determined to be unschedulable, the autoscaler cannot kick in.
Thus, we should remove the FailedScheduling event from the list of unrecoverable workspace pod events. https://github.com/devfile/devworkspace-operator/issues/1279 is required to give users the ability to re-add the FailedScheduling event to that list.