As motivating context, here is how a `mousedown` event is interpreted for the overhead display overlay: https://github.com/hcrlab/stretch_web_interface/blob/38fc0183f1846d2d498f7c45f896532225fbb341/src/pages/operator/ts/operator.cmp.ts#L882-L936

Note how the behavior differs depending on how the user has configured the control. The event only matters if the control is configured to show the predictive navigation overlay, so all of the behavior sits behind a guard on that condition. Further, the exact response to the event is governed by the "action mode" the user has set. If it's press-release, we need to listen for a release event so we can stop the action. If it's click-click, we need to determine which click the event represents (the initiating or the terminating click) and act appropriately.
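Schematically, the branching looks something like this. This is a rough, illustrative reduction, not the actual code from operator.cmp.ts, and all identifiers here are made up:

```ts
// Illustrative reduction of the branching described above; not the real code.
class OverlayControl {
    overlayMode: "predictive-nav" | "none" = "predictive-nav";
    actionMode: "press-release" | "click-click" = "press-release";
    private actionInProgress = false;

    onMouseDown = (event: MouseEvent) => {
        // Guard: only react when the predictive navigation overlay is shown.
        if (this.overlayMode !== "predictive-nav") return;

        if (this.actionMode === "press-release") {
            this.startAction(event);
            // Temporarily listen for the release so the action can be stopped.
            const onUp = () => {
                this.stopAction();
                window.removeEventListener("mouseup", onUp);
            };
            window.addEventListener("mouseup", onUp);
        } else if (this.actionMode === "click-click") {
            if (!this.actionInProgress) {
                this.actionInProgress = true; // initiating click
                this.startAction(event);
            } else {
                this.actionInProgress = false; // terminating click
                this.stopAction();
            }
        }
    };

    private startAction(_event: MouseEvent) { /* begin the configured action */ }
    private stopAction() { /* halt it */ }
}
```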
The behavior for all "overlay modes" and all intersecting "action modes" is tangled here simply because we'd prefer to register only a single listener for the mousedown event. But this tangling creates significant potential for bugs (since code must be carefully conditioned to fire only in particular configurations), makes bugs harder to catch (since they're buried in a mass of code that doesn't or shouldn't fire for a given configuration), and makes it difficult to introduce new modes.
We should disentangle listening for an event from actually processing it. The simplest refactoring would be to isolate how the current handler behaves for each mode permutation and create a separate handler function for each. Whenever the user changes their configuration, we point the event listener at the appropriate handler function. These individual handlers will be easier to write and to maintain. There are places where a handler temporarily registers a second handler; these would need to become a small class that is registered as the handler for both events and internally determines when and how to respond to each, as sketched below.
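A minimal sketch of that shape, assuming hypothetical `OverlayMode`/`ActionMode` types and handler names (none of these identifiers come from the codebase):

```ts
type OverlayMode = "predictive-nav" | "none";
type ActionMode = "press-release" | "click-click";

// Hypothetical commands the handlers trigger; stand-ins for the real ones.
const startAction = () => { /* begin the configured action */ };
const stopAction = () => { /* halt it */ };

// One dedicated, easy-to-read handler per mode permutation.
function clickClickHandler(_event: MouseEvent) {
    // determine whether this is the initiating or terminating click, then act
}
function noopHandler(_event: MouseEvent) {
    // overlay hidden: nothing to do
}

// Press-release spans two events (mousedown starts the action, mouseup stops
// it), so instead of one handler temporarily registering another, it becomes
// a small class registered for both events that decides internally how to
// respond to each. (An object with a handleEvent method is a valid DOM
// event listener.)
class PressReleaseHandler {
    constructor(private start: () => void, private stop: () => void) {}
    handleEvent(event: Event) {
        if (event.type === "mousedown") this.start();
        else if (event.type === "mouseup") this.stop();
    }
}

// Whenever the user changes configuration, point the listener(s) at the
// handler for the new permutation.
function reconfigure(target: EventTarget, overlay: OverlayMode, action: ActionMode) {
    // ... remove any previously registered listeners here ...
    if (overlay !== "predictive-nav") {
        target.addEventListener("mousedown", noopHandler);
    } else if (action === "click-click") {
        target.addEventListener("mousedown", clickClickHandler);
    } else {
        const handler = new PressReleaseHandler(startAction, stopAction);
        target.addEventListener("mousedown", handler);
        target.addEventListener("mouseup", handler);
    }
}
```

Each handler then fires in exactly one configuration by construction, rather than by careful conditioning inside one monolithic listener.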
Many of these handlers share a common structure (e.g. they trigger a behavior on the first event, then disable it on the second or whenever some other event fires), so in the long term we'd want a parametric representation so we can specify them compactly. The state machine representation we've chatted about seems like the cleanest fit here. We'd want to be able to write small, arbitrary state machine templates that could be instantiated for each UI component based on the user configuration. Once configured, each state machine declares which events it needs to consume, and someone on the outside is responsible for feeding them to it. The machine governs how the UI component it owns is transformed in response to events, as well as what commands get emitted.
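A minimal sketch of what such a template could look like, assuming a hypothetical `Transition`/`UiStateMachine` shape (nothing here comes from the existing codebase):

```ts
type EventName = "mousedown" | "mouseup" | "mouseleave";

interface Transition<S extends string> {
    from: S;
    on: EventName;
    to: S;
    emit?: string; // command emitted when this transition fires
}

class UiStateMachine<S extends string> {
    constructor(
        private state: S,
        private transitions: Transition<S>[],
        private onCommand: (command: string) => void,
    ) {}

    // Declares which events the outside world must feed to this machine.
    consumes(): EventName[] {
        return Array.from(new Set(this.transitions.map((t) => t.on)));
    }

    feed(event: EventName) {
        const tr = this.transitions.find(
            (t) => t.from === this.state && t.on === event,
        );
        if (!tr) return; // event is irrelevant in the current state
        this.state = tr.to;
        if (tr.emit) this.onCommand(tr.emit);
    }
}

// Instantiating a template for the press-release action mode:
const pressRelease = new UiStateMachine("idle", [
    { from: "idle", on: "mousedown", to: "active", emit: "startAction" },
    { from: "active", on: "mouseup", to: "idle", emit: "stopAction" },
    { from: "active", on: "mouseleave", to: "idle", emit: "stopAction" },
], (command) => console.log(command));
```

The key property is that the machine never calls `addEventListener` itself; its `consumes()` list tells the surrounding code which events to forward, which keeps event registration and event processing decoupled.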