Ali-Can-Keskin opened this issue 2 months ago
I have observed this as well. Regarding the pointer rays, see the discussion I posted for a description of the behavior using an alternative pose source. https://github.com/orgs/MixedRealityToolkit/discussions/729#discussion-6587873
So this happens because the openxr_right/left_hand prefab uses a ControllerVisualizer: when the isTracked action reports that the controller is tracked (e.g. the user picks up the Quest controller), the useFallbackVisuals flag remains false, because the hand aggregator can still return data for the palm joint, i.e. XRSubsystemHelpers.HandsAggregator.TryGetJoint(TrackedHandJoint.Palm, handNode, out _) returns true, so the negated !TryGetJoint(...) check evaluates to false.
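For reference, here is a minimal sketch of that check (not the actual ControllerVisualizer source; namespaces may differ by MRTK3 version, and handNode is assumed to be the XRNode for the hand in question):

```csharp
using UnityEngine.XR;
using MixedReality.Toolkit;             // namespaces may vary by MRTK3 version
using MixedReality.Toolkit.Subsystems;

public static class FallbackVisualsCheck
{
    // Mirrors the decision described above: fallback (controller) visuals are
    // used only when the hand aggregator has no palm joint for this hand.
    public static bool UseFallbackVisuals(XRNode handNode) =>
        XRSubsystemHelpers.HandsAggregator == null ||
        !XRSubsystemHelpers.HandsAggregator.TryGetJoint(
            TrackedHandJoint.Palm, handNode, out _);
}

// With a Quest controller held, synthetic hand data makes TryGetJoint succeed,
// so this returns false and the articulated hand is shown, not the controller.
```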
I assume this is expected behaviour in a world where controllers support returning hand tracking data for a limited set of interaction states (so fake 26-joint data). @keveleigh @AMollis would we expect this to fall back to visualizing the controller these days rather than displaying an articulated hand driven by the controller interactions? There is a fallback controller model in this prefab, but if you force that to be shown, you still don't get to see how the user is interacting with it (e.g. no hand interacting with it, no buttons shown as pressed, etc.).
> in a world where controllers support returning hand tracking data for a limited set of interaction states
There's a relatively new feature called Capsense, which you may already be aware of, but for reference: https://developer.oculus.com/documentation/unity/unity-capsense/
I think this generates fake hand pose data as well as controller pose.
The UX I would expect to see with this enabled and set to Conforming To Controller is: [images of the expected UX omitted]
It is also possible to have actual hand tracking data and controllers simultaneously via Multimodal: https://developer.oculus.com/documentation/unity/unity-multimodal/
If I had a controller in only one hand, I would expect to see the above UX for my controller hand, while the non-controller hand would have a pointer ray emanating from the palm.
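As an aside, with multimodal enabled both device types can be tracked at once, so an app can check what Unity reports for each hand node. A hedged sketch using standard UnityEngine.XR calls (the helper class and method names are illustrative, not from MRTK3):

```csharp
using System.Collections.Generic;
using UnityEngine.XR;

// Sketch: distinguish a held controller from a bare tracked hand by the
// InputDeviceCharacteristics Unity reports at that hand's XRNode.
public static class HandOrController
{
    public static bool IsControllerHeld(XRNode handNode)
    {
        var devices = new List<InputDevice>();
        InputDevices.GetDevicesAtXRNode(handNode, devices);

        foreach (var device in devices)
        {
            // Controllers report the Controller characteristic; articulated
            // hands report HandTracking instead.
            if (device.isValid &&
                (device.characteristics & InputDeviceCharacteristics.Controller) != 0)
            {
                return true;
            }
        }
        return false;
    }
}
```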
Hi, I ran into this issue as well. For our app it is more useful to see the controller instead of the hand: when switching to VR, the user needs to see their controller. What meta-meta describes is the expected behavior when using a Quest (the default behavior in a Unity MetaXR project).
You mentioned forcing the fallback model to show; do you know what the most efficient way to do that would be?
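One hypothetical approach (a minimal sketch; the component and field names here are illustrative and not part of the MRTK3 API) is to add a small component to the openxr_right/left_hand prefab that keeps the fallback controller model active:

```csharp
using UnityEngine;

// Hypothetical workaround sketch: forcibly hide the articulated hand visuals
// and show the fallback controller model on the hand controller prefab.
// Both references are wired up in the Inspector; neither field name comes
// from MRTK3 itself.
public class ForceControllerModel : MonoBehaviour
{
    [SerializeField] private GameObject handVisuals;    // articulated hand mesh
    [SerializeField] private GameObject fallbackModel;  // fallback controller model

    private void LateUpdate()
    {
        // Run after the visualizer updates so this state wins each frame.
        if (handVisuals != null && handVisuals.activeSelf)
        {
            handVisuals.SetActive(false);
        }
        if (fallbackModel != null && !fallbackModel.activeSelf)
        {
            fallbackModel.SetActive(true);
        }
    }
}
```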
When I most recently tested this, I saw hands when the controllers were set down and the fallback controller model when the controllers were in use. After talking to the other maintainers, this appears to be the intended behavior.
Describe the bug
When using Quest 3 with controllers, I see hands instead of controllers. The rays are also offset, as if they were coming from the hands.
Expected behavior
When using controllers, the controllers should be visualized and the rays should originate from the controllers, not the hands.