Hi everyone!
For experimentation purposes, I've created a virtual hand within the Unity environment, and I've been aiming to feed the virtual hand's joint positions into the MRTK as an articulated hand. I've based my approach on the current MRTK Leap Motion Input Provider.
So far, I've been able to create an Input Provider and successfully raise/close the input source. I've also verified that the joint poses are updating correctly within the Input Provider, and that each 'joint' is reported to the MRTK under the correct source.
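For context, the source-detection path follows roughly this pattern (a simplified sketch rather than the exact code in the link below; `VirtualHandDeviceManager` and `VirtualArticulatedHand` are placeholder names for my provider and controller classes):

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;

public class VirtualHandDeviceManager : BaseInputDeviceManager
{
    // ... constructors and update loop omitted ...

    // Called when the virtual hand appears in the scene.
    private void OnVirtualHandDetected(Handedness handedness)
    {
        var inputSystem = CoreServices.InputSystem;

        // Request the articulated-hand pointers configured in the pointer profile,
        // so cursors/rays can be attached to this source.
        var pointers = RequestPointers(SupportedControllerType.ArticulatedHand, handedness);

        var inputSource = inputSystem?.RequestNewGenericInputSource(
            $"Virtual {handedness} Hand", pointers, InputSourceType.Hand);

        // VirtualArticulatedHand is my BaseHand-derived controller (placeholder name).
        var hand = new VirtualArticulatedHand(TrackingState.Tracked, handedness, inputSource);

        // Each pointer needs to know which controller drives it.
        for (int i = 0; i < inputSource.Pointers.Length; i++)
        {
            inputSource.Pointers[i].Controller = hand;
        }

        inputSystem?.RaiseSourceDetected(inputSource, hand);
    }
}
```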
However, I'm not seeing any cursors coming out of the hands (even though all my pointers are attached to the articulated hand), nor are any of my virtual 'pinch' gestures being recognized. Only the head gaze seems to be working.
Is there something I'm possibly missing?
The codebase for this provider can be found here:
https://drive.google.com/drive/folders/1a81DBVNaS7W6gwd2PYVMI14nKvczHJNc?usp=sharing
Basically, the AutoHand MRTK Skeleton obtains the transforms from the virtual hands, which are then used by the provider.
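In the hand controller itself, the per-frame update looks roughly like this (again a simplified sketch; `jointTransforms` stands in for the TrackedHandJoint-to-Transform lookup that the AutoHand MRTK Skeleton exposes):

```csharp
// Inside the BaseHand-derived controller.
public void UpdateController(IDictionary<TrackedHandJoint, Transform> jointTransforms)
{
    var jointPoses = new Dictionary<TrackedHandJoint, MixedRealityPose>();
    foreach (var pair in jointTransforms)
    {
        jointPoses[pair.Key] = new MixedRealityPose(pair.Value.position, pair.Value.rotation);
    }

    // Push the updated joint poses to the input system so hand joint
    // visualizers and hand-based interactions can consume them.
    CoreServices.InputSystem?.RaiseHandJointsUpdated(InputSource, ControllerHandedness, jointPoses);
}
```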
Any help would be massively appreciated :(