Some thoughts about reintroducing the Desktop (keyboard & mouse) controller, which has been on the list since the XRIT upgrade...
Goals
To provide an intuitive interface for 2D desktop users
Transparent to developers using the XRIT (i.e. drives XR Interactables without any additional setup)
Scope
Should work with PC, WebGL & WebXR (outside of VR/AR mode)
Touchscreens are out of scope
Should support all the interactions in the XR Interaction Toolkit Demo scene sample:
Grab
Poke
Gaze
UI
Comparison with Device Simulator
The Component is not a replacement for the XR Device Simulator. Whereas the Device Simulator can emulate all typical XR Controller Inputs, the Desktop interface will be more limited. It will only support the minimum input required to engage the Interactables mentioned above (that is, trigger the Hover, Select and Activate events).
It is expected that applications intending to support the Desktop interface are implicitly developed with this in mind, sticking to Interactable setups that are amenable to being driven with a keyboard and mouse, even though the Desktop controller itself requires no active development effort or additional control flow to be configured.
Other applications will still need the XR Device Simulator to emulate more complex interactions, such as multi-interactor behaviour.
Proposed User Story
If an XR device is active, it overrules the Desktop interface. However, if an XR Device is connected but inactive, the Desktop interface should function automatically without any configuration change (1).
Users interact from a first-person perspective. WASD translates the viewpoint. Holding the Right Mouse Button while not over an Interactable rotates the Camera; otherwise the cursor behaves normally (2).
Users use the mouse both to look around and to Select or Grab items. Items are activated with the Left Mouse Button, and selected with the Right Mouse Button (CTRL + Mouse Button on Mac). Moving the cursor over an item should activate Hover. Ideally, items should respond to Hover themselves (e.g. with an XRInteractableAffordanceStateProvider), but we will also colour the reticle so that it responds to Interactable objects, in the way the current Ray Interactor does.
When the Right Mouse Button is pressed over an Interactable, the camera orientation is locked and will not change until the button is released.
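The clutch behaviour above could be sketched roughly as follows. This is a hypothetical illustration, not a committed design: DesktopLookController and CursorIsOverInteractable are placeholder names, and it assumes the Input System package is in use.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Placeholder sketch of the right-mouse "clutch" look described above.
public class DesktopLookController : MonoBehaviour
{
    [SerializeField] Transform cameraTransform;  // head/camera of the XR rig
    [SerializeField] float sensitivity = 0.1f;

    // Set by whatever hover detection is used (e.g. a raycast against Interactables).
    public bool CursorIsOverInteractable { get; set; }

    bool clutching;

    void Update()
    {
        var mouse = Mouse.current;
        if (mouse == null)
            return;

        // Begin rotating only if the right button was pressed while NOT over an
        // Interactable; once clutching, keep rotating until the button is released.
        if (mouse.rightButton.wasPressedThisFrame && !CursorIsOverInteractable)
            clutching = true;
        if (mouse.rightButton.wasReleasedThisFrame)
            clutching = false;

        if (clutching)
        {
            Vector2 delta = mouse.delta.ReadValue() * sensitivity;
            cameraTransform.Rotate(Vector3.up, delta.x, Space.World);
            cameraTransform.Rotate(Vector3.right, -delta.y, Space.Self);
        }
    }
}
```

Pressing the Right Mouse Button over an Interactable never sets the clutch flag, so the orientation stays locked for the duration of that press, as described above.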
If an Interactable is selected, like below, we will show a spline between the cursor and the focal point, to emulate static or heavy items.
Users can interact with world-space UI Canvas elements by clicking normally, and text boxes by typing.
(1) This is so users can use the Desktop interface to prototype their scenes with headsets connected, instead of having to keep putting on and taking off a headset.
(2) The reason for masking the orientation change is so users can use the cursor to more easily interact with on screen keyboards and other UI elements.
An alternative to clutching is that the cursor rotates the camera by default, but stops moving when over a UX element. This might be quite nice in XR-focused apps, but could also become odd if the cursor ends up far away from the center of the screen.
Implementation Options
Almost every interaction can be achieved with the XR Device Simulator's Left Controller and Grab or Trigger button emulation.
One possibility, then, is to create our own InputDevice subclasses and hook into the InputSystem the same way the device simulator does. One problem with this is that the behaviour when multiple XR devices are added appears to be undefined: for example, we can easily take control of the hand from the device simulator with an XR controller, but not vice versa.
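For reference, the InputDevice route might look something like the following. This is only a sketch under stated assumptions: DesktopXRController is a placeholder name, and a real implementation would declare a state struct with the controls the XRIT bindings expect (devicePosition, deviceRotation, trigger, grip, and so on) and push updates with InputState.Change.

```csharp
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Layouts;
using UnityEngine.InputSystem.XR;

// Hypothetical desktop-driven controller device, registered the same way the
// device simulator registers its simulated controllers.
[InputControlLayout(displayName = "Desktop XR Controller")]
public class DesktopXRController : XRController
{
}

public static class DesktopXRControllerSetup
{
    public static DesktopXRController Register()
    {
        InputSystem.RegisterLayout<DesktopXRController>();
        var device = InputSystem.AddDevice<DesktopXRController>();

        // Mark it as the left hand so the existing XRIT action bindings pick it up.
        InputSystem.SetDeviceUsage(device, CommonUsages.LeftHand);
        return device;
    }
}
```

The multi-device ambiguity noted above would apply here too: nothing in this registration decides which device wins when a real XR controller and this desktop device are both present.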
Another possibility is to manually trigger the Input Action References on the XR Controller scripts. This will provide more control and also mean we can repurpose the existing hands.
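One way to realise this second option is to subclass the existing controller script and fill in the controller state directly, rather than going through devices at all. The sketch below assumes XRIT's XRBaseController, whose UpdateInput hook subclasses can override; DesktopController is a placeholder name and the button mapping simply mirrors the proposed user story.

```csharp
using UnityEngine.InputSystem;
using UnityEngine.XR.Interaction.Toolkit;

// Placeholder sketch: drive Select/Activate on an existing controller from the mouse.
public class DesktopController : XRBaseController
{
    protected override void UpdateInput(XRControllerState controllerState)
    {
        base.UpdateInput(controllerState);

        var mouse = Mouse.current;
        if (mouse == null)
            return;

        // Left Mouse Button -> Activate, Right Mouse Button -> Select,
        // matching the proposed user story.
        controllerState.activateInteractionState.SetFrameState(mouse.leftButton.isPressed);
        controllerState.selectInteractionState.SetFrameState(mouse.rightButton.isPressed);
    }
}
```

Because this reuses the existing controller and Interactor components, the hands and their configured interaction layers carry over unchanged.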
Another possibility is to introduce a new GameObject in parallel to the hands, which maintains its own Interactor components, set up specifically for the desktop case. The advantage of this is that it would probably be easier to set up dedicated visuals and configurations more suitable for the Desktop interface. We would not have to worry about fighting over control of the Hand Controllers, because the XR Input Device can keep tracking the hands while the Desktop component has its own GameObject to deal with.
For locomotion, the Desktop interface would translate the XR rig, and orient the head using the Transforms.
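The locomotion half could be as simple as the sketch below: WASD translates the rig root and the look direction comes from the head transform, all via plain Transforms as described above. DesktopLocomotion is a placeholder name; the rig and head references are assumed to point at the XR Origin root and its camera.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Placeholder sketch of WASD locomotion driving the XR rig's Transforms directly.
public class DesktopLocomotion : MonoBehaviour
{
    [SerializeField] Transform rig;    // the XR Origin root
    [SerializeField] Transform head;   // the camera transform
    [SerializeField] float moveSpeed = 2f;

    void Update()
    {
        var kb = Keyboard.current;
        if (kb == null)
            return;

        var input = Vector3.zero;
        if (kb.wKey.isPressed) input += Vector3.forward;
        if (kb.sKey.isPressed) input += Vector3.back;
        if (kb.aKey.isPressed) input += Vector3.left;
        if (kb.dKey.isPressed) input += Vector3.right;

        // Move relative to where the head is looking, but keep the motion on
        // the horizontal plane so WASD never flies the rig into the air.
        var forward = head.forward; forward.y = 0f; forward.Normalize();
        var right = head.right; right.y = 0f; right.Normalize();
        rig.position += (forward * input.z + right * input.x) * moveSpeed * Time.deltaTime;
    }
}
```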