MixedRealityToolkit / MixedRealityToolkit-Unity

This repository holds the third generation of the Mixed Reality Toolkit for Unity. The latest version of the MRTK can be found here.
BSD 3-Clause "New" or "Revised" License

[BUG] Quest 3 Controllers Not visualized #752

Open Ali-Can-Keskin opened 2 months ago

Ali-Can-Keskin commented 2 months ago

Describe the bug

When using a Quest 3 with controllers, I see hands instead of controllers. The rays are also offset, as if they are coming from the hands.

To reproduce

Steps to reproduce the behavior:

  1. Make sure you have the following features enabled in "XR Plug-in Management/OpenXR":
    • Hand Tracking
    • Hand Tracking Subsystem
    • Meta Quest Support
  2. Make sure you have enabled Oculus Touch Controller Profile
  3. Make sure you have OpenXR as the only provider in Plug-in Management
  4. Build the sample for Android
  5. Try to use controllers

Expected behavior

When using controllers, the controllers should be visualized, and rays should start from the controllers, not the hands.


meta-meta commented 2 months ago

I have observed this as well. Regarding the pointer rays, see the discussion I posted for a description of the behavior using an alternative pose source. https://github.com/orgs/MixedRealityToolkit/discussions/729#discussion-6587873

MaxPalmer-UH commented 1 month ago

So this happens because the openxr_right/left_hand prefab uses a ControllerVisualizer: when the isTracked action reports that the controller is tracked (e.g. the user picks up the Quest controller), the useFallbackVisuals flag remains false, because the hand aggregator can still return data for the palm joint, i.e. XRSubsystemHelpers.HandsAggregator.TryGetJoint(TrackedHandJoint.Palm, handNode, out _) returns true.
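A simplified sketch of the logic being described (not the actual MRTK source; `handNode` stands in for the visualizer's XRNode):

```csharp
// Sketch only: with Quest 3 controllers held, the runtime can synthesize hand
// joint data, so the palm-joint query succeeds and fallback visuals never engage.
bool palmAvailable =
    XRSubsystemHelpers.HandsAggregator != null &&
    XRSubsystemHelpers.HandsAggregator.TryGetJoint(TrackedHandJoint.Palm, handNode, out _);

// useFallbackVisuals stays false, so the hand mesh is shown instead of the
// fallback controller model.
bool useFallbackVisuals = !palmAvailable;
```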

I assume this is expected behaviour in a world where controllers can report hand tracking data for a limited set of interaction states (i.e. synthesized 26-joint data). @keveleigh @AMollis would we expect this to fall back to visualizing the controller these days, rather than displaying an articulated hand driven by the controller interactions? There is a fallback controller model in this prefab, but if you force that to be shown, you still don't get to see how the user is interacting with it (e.g. no hand interacting with it, no buttons shown as pressed, etc.).
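One possible direction for forcing the controller model (an untested sketch; the gating logic inside ControllerVisualizer may well differ) is to treat the presence of a physical controller device as overriding the synthesized hand joints:

```csharp
using System.Collections.Generic;
using UnityEngine.XR;

static class ControllerPresence
{
    // Returns true if a motion controller device is connected at the given node
    // (e.g. XRNode.RightHand). Uses Unity's XR InputDevices API.
    public static bool ControllerConnected(XRNode node)
    {
        var devices = new List<InputDevice>();
        InputDevices.GetDevicesAtXRNode(node, devices);
        foreach (var device in devices)
        {
            if (device.isValid &&
                (device.characteristics & InputDeviceCharacteristics.Controller) != 0)
            {
                return true;
            }
        }
        return false;
    }
}

// Sketch: prefer the fallback controller model whenever a controller is present,
// even if the runtime is synthesizing hand joints from controller state, e.g.:
// bool useFallbackVisuals = ControllerPresence.ControllerConnected(handNode)
//     || !palmJointAvailable;
```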

meta-meta commented 1 month ago

> in a world where controllers support returning hand tracking data for a limited set of interaction states

There's a relatively new feature called Capsense, which you may be aware of but for reference: https://developer.oculus.com/documentation/unity/unity-capsense/

I think this generates fake hand pose data as well as controller pose.

The UX I would expect to see with this enabled and set to Conforming To Controller is:

It is also possible to have actual hand tracking data and controllers simultaneously via Multimodal: https://developer.oculus.com/documentation/unity/unity-multimodal/

If I had a controller in only one hand, I would expect to see the above UX for my controller hand, while the non-controller hand would have a pointer ray emanating from the palm.

SashelI commented 1 month ago

> I assume this is expected behaviour in a world where controllers support returning hand tracking data for a limited set of interaction states (so fake 26 joint data). @keveleigh @AMollis would we expect this to fallback to visualizing the controller these days rather than displaying an articulated hand being driven by the controller interactions? There is a fallback controller model in this prefab, but if you force that to be shown, you still don't get to see how the user is interacting with it (e.g. no hand interacting with it, no buttons shown as pressed, etc).

Hi, I ran into this issue as well. For our app it is more useful to see the controller instead of the hand: when switching to VR, the user needs to see their controller. And what meta-meta describes is the expected behavior when using a Quest (the default behavior in a Unity MetaXR project).

You mentioned forcing the fallback model to show; do you know what the most efficient way to do that would be?

MaxPalmer-UH commented 2 weeks ago

When I most recently tested this, I see hands when the controllers are put down, and then the fallback controller model when the controllers are in use. Talking to the other maintainers, this appears to be the intended behavior.