MixedRealityToolkit / MixedRealityToolkit-Unity

This repository holds the third generation of the Mixed Reality Toolkit for Unity. The latest version of the MRTK can be found here.
BSD 3-Clause "New" or "Revised" License

UI Elements as a SOURCE for Input Actions #307

Open · IssueSyncBot opened 1 year ago

IssueSyncBot commented 1 year ago

Original issue opened by:

@jbienzms


Describe the problem

As we're currently planning MRTK 3.0, I wanted to revisit a topic that I started back in 2019 with Issue #4006. Namely, developers need a way to handle the intent of a user regardless of how the user specified their intent. For example, the application might have a Save intent, and that intent could be triggered several ways:

  1. A voice command with the phrase "Save"
  2. A physical button on a controller
  3. A UI button in a holographic toolbar

Currently, only two of the above (voice and controller) can generate an Input Action. UI buttons cannot serve as the source of an Input Action; they can only serve as the target.

With #4475 we added the ability for MonoBehaviours to handle Input Actions regardless of which device (or source) raised them. But this pattern breaks down when it comes to UI buttons. The only way for a MonoBehaviour to handle both Input Actions and UI buttons is to subscribe to both. Alternatively, the Input Action can trigger the button and the MonoBehaviour can subscribe to the button click. But this forces the developer to follow one pattern when a UI button exists and another when it doesn't. This is not ideal. It also doesn't work with other UI controls.
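To make the duplication concrete, here is a minimal sketch of the pattern described above (the names `saveAction`, `saveButton`, and `SaveHandler` are illustrative, not MRTK API): the same handler must subscribe to both the Unity Input System action and the UI button to cover every source of the "Save" intent.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.UI;

public class SaveHandler : MonoBehaviour
{
    [SerializeField] private InputActionReference saveAction; // voice/controller path
    [SerializeField] private Button saveButton;               // UI button path

    private void OnEnable()
    {
        // Two separate subscriptions for one user intent.
        saveAction.action.performed += OnSavePerformed;
        saveAction.action.Enable();
        saveButton.onClick.AddListener(Save);
    }

    private void OnDisable()
    {
        saveAction.action.performed -= OnSavePerformed;
        saveButton.onClick.RemoveListener(Save);
    }

    private void OnSavePerformed(InputAction.CallbackContext context) => Save();

    private void Save()
    {
        // Single save implementation, reached by two different paths.
    }
}
```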

Describe the solution you'd like

The pattern I'd like to see MRTK 3 offer would be similar to the Command Pattern in WPF. With the Command Pattern, any UI element can be associated with a command.

<StackPanel>
  <Menu>
    <MenuItem Command="ApplicationCommands.Save" />
  </Menu>
</StackPanel>

Or

<StackPanel>
    <Button Command="ApplicationCommands.Save" />
</StackPanel>

WPF even allows keyboard shortcuts to be bound to commands:

<Window.InputBindings>
  <KeyBinding Key="S"
              Modifiers="Control" 
              Command="ApplicationCommands.Save" />
</Window.InputBindings>

Then, in code-behind, the developer only has to handle the Save command once. This works regardless of what UI element or keyboard shortcut triggered the command.
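For reference, that single code-behind handler looks roughly like this in WPF (a sketch; the window and handler names are illustrative):

```csharp
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        // One binding covers the MenuItem, the Button, and Ctrl+S.
        CommandBindings.Add(new CommandBinding(
            ApplicationCommands.Save, OnSaveExecuted, OnSaveCanExecute));
    }

    private void OnSaveExecuted(object sender, ExecutedRoutedEventArgs e)
    {
        // Save logic lives in exactly one place.
    }

    private void OnSaveCanExecute(object sender, CanExecuteRoutedEventArgs e)
    {
        e.CanExecute = true; // e.g. only when there are unsaved changes
    }
}
```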

I'm not necessarily suggesting that MRTK should follow the exact same ICommand pattern, but with data binding that's actually a possibility. Even without ICommand, if UI elements could generate Input Actions then a single handler would be possible.

There is one very awesome thing about the ICommand Pattern which Input Actions can't match: ICommand can report whether it's currently available (CanExecute), and it can also report when its availability changes (CanExecuteChanged). This allows all UI elements bound to the command to automatically enable and disable themselves accordingly.
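For context, a minimal ICommand implementation showing those two members is the common "RelayCommand" pattern (not an MRTK or WPF built-in; the class name is conventional):

```csharp
using System;
using System.Windows.Input;

public sealed class RelayCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute = null)
    {
        this.execute = execute;
        this.canExecute = canExecute;
    }

    // Bound controls listen to this and re-query CanExecute when it fires.
    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) => canExecute?.Invoke() ?? true;

    public void Execute(object parameter) => execute();

    // Call when availability changes, e.g. a document becomes dirty.
    public void RaiseCanExecuteChanged() =>
        CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}
```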

Describe alternatives you've considered

I've previously tried two alternatives:

  1. Have a UI button receive the Input Action and do all handling on Button_Click. This only works with a Button and code has to change if the Button is removed.
  2. I wrote a behavior that listens for Button_Click and actually generates an Input Action. This works extremely well, but it's only in my app and no one else has the ability to use it. I'd happily contribute this code if interested.

Additional context

This has been requested by a few folks in the MRPP which is why I added the ISV tag. Please feel free to reach out to me for details on who is requesting.


ISSUE MIGRATION

Issue migrated from: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/10175

IssueSyncBot commented 1 year ago

Original comment by:

@jbienzms


I assigned this to @CDiaz-MS since I know she's looking at interactions for MRTK 3. But I also wanted it to be on the radar for @Zee2, @cre8ivepark and @BillingsAm3. Thanks for considering this feature!

IssueSyncBot commented 1 year ago

Original comment by:

@Zee2


Hey @jbienzms, it would be great to have this in MRTK3. The approach we would probably like to take would be for the user to define a Unity Input System Input Action ahead of time, and then attach a component to the button that would construct a mock control, bind it to that action, and invoke it at runtime, as described here: https://rene-damm.github.io/HowDoI.html#set-an-actions-value-programmatically
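A rough sketch of that approach, under the assumption that the action asset has a binding to a control on a virtual device (here a Gamepad's south button, chosen only for illustration; the component and field names are hypothetical, not MRTK3 API):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;

public class ButtonToInputAction : MonoBehaviour
{
    [SerializeField] private InputActionReference saveAction; // assumed action asset

    private Gamepad virtualDevice;

    private void OnEnable()
    {
        // Add a virtual device whose control the action is bound to,
        // e.g. the action has a "<Gamepad>/buttonSouth" binding.
        virtualDevice = InputSystem.AddDevice<Gamepad>();
        saveAction.action.Enable();
    }

    private void OnDisable()
    {
        InputSystem.RemoveDevice(virtualDevice);
    }

    // Wire this to the UI button's OnClick in the inspector.
    public void OnUiButtonClicked()
    {
        // Press and release the virtual control so the action performs once.
        InputState.Change(virtualDevice.buttonSouth, 1f);
        InputState.Change(virtualDevice.buttonSouth, 0f);
    }
}
```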