Closed — keveleigh closed this issue 9 months ago
Could we move to using a `MixedRealityTransform`, or perhaps a bespoke hand-joint-specific struct, or some other breaking change on 3.x? @RogPodge @keveleigh what is driving the 2.x urgency on this?
> what is driving the 2.x urgency on this?
This is a feature the platform team and various partners have been requesting for a while. Essentially, it's platform data that we've inadvertently hidden in MRTK. I had thought we added it for 2.7, but it turns out we didn't!
If we wanted to move to using a `MixedRealityTransform` for the events, we could make it configurable in the Hand Tracking profile and default to the current behavior. We'd also still likely want some sort of polling support, though (the `HandJointUtils` path, though I think that currently relies on an interface defined with `MixedRealityPose`s...)
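For reference, the polling path mentioned above looks roughly like this today in MRTK 2.x (the `HandJointUtils.TryGetJointPose` call is existing API; the surrounding component is just an illustrative sketch). Note that the returned `MixedRealityPose` carries only position and rotation, which is exactly why the radius has nowhere to go:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Illustrative component: polls a hand joint pose each frame via HandJointUtils.
public class JointPosePoller : MonoBehaviour
{
    private void Update()
    {
        // MixedRealityPose exposes Position and Rotation only -- no joint radius.
        if (HandJointUtils.TryGetJointPose(TrackedHandJoint.IndexTip, Handedness.Right, out MixedRealityPose pose))
        {
            Debug.Log($"Index tip at {pose.Position}");
        }
    }
}
```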
We appreciate your feedback and thank you for reporting this issue.
Microsoft Mixed Reality Toolkit version 2 (MRTK2) is currently in limited support. This means that Microsoft is only fixing high priority security issues. Unfortunately, this issue does not meet the necessary priority and will be closed. If you strongly feel that this issue deserves more attention, please open a new issue and explain why it is important.
Microsoft recommends that all new HoloLens 2 Unity applications use MRTK3 instead of MRTK2.
Please note that MRTK3 was released in August 2023. It features an all-new architecture for developing rich mixed reality experiences and has a minimum requirement of Unity 2021.3 LTS. For more information about MRTK3, please visit https://www.mixedrealitytoolkit.org.
Thank you for your continued support of the Mixed Reality Toolkit!
Describe the problem
Many platforms, like WMR and OpenXR, provide a radius for each hand joint in addition to the position/rotation. We should expose this somewhere for developers.
Describe the solution you'd like
I see a couple ways we could go about this:

1. Add `scale` to `MixedRealityPose`, which is the type we currently use to represent hand joints. `MixedRealityTransform` already includes scale, so this would cause some redundancy and potential confusion. We also can't just switch to using `MixedRealityTransform` for the existing events, since that'd be a breaking change for anybody currently using the events.
2. Extend `HandJointUtils` to allow for hand joint radius access. This would probably be the least invasive, as we can easily add this as a new method. The remaining piece would be figuring out the right way to obtain the hand joint radius data from the data providers. Today we use the `IMixedRealityHand` interface to obtain a `MixedRealityPose` for a specific hand joint, which again goes back to entry 1 on why there's not a clear solution (and changing the interface would be a breaking change).