Closed sh-taheri closed 1 year ago
Hello! Thank you very much for your interest in our work! :)
For the first issue, it looks like SteamVR is set as your default OpenXR runtime. You can change it to Oculus in the Oculus software, following the instructions in this image I found:
Do other Unity applications work on your Quest via Link or Air Link? Just to see where the problem may come from.
For 2., I will take a look at the code; there may be a bug in the Calibrator.cs script.
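For context, a typical height calibrator computes a uniform scale from the tracked HMD height and the avatar's rest-pose eye height. The sketch below is a language-agnostic illustration in Python, not the project's actual Calibrator.cs code; the function name, parameters, and the optional `min_scale` floor (which would explain a "minimum height" behaviour) are all hypothetical.

```python
def calibration_scale(hmd_height, avatar_eye_height, min_scale=None):
    """Uniform scale that maps the avatar's rest-pose eye height
    to the user's measured HMD height at calibration time."""
    scale = hmd_height / avatar_eye_height
    if min_scale is not None:
        # A floor like this would make the avatar stay bigger
        # than a short user, matching the behaviour described above.
        scale = max(scale, min_scale)
    return scale
```

If the calibrator clamps the scale this way, a user shorter than the clamp threshold would always see an avatar larger than themselves.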
The current pose features come from the motion matching skeleton before it is retargeted to the Unity avatar for rendering. Therefore, the current pose features actually come from a pose in the motion matching database, which, as you said, was created by one user. Let me know if you need more clarification on this!
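To illustrate why this makes the pose part of the query independent of the user's size: in a standard motion matching loop, the pose features of the query are read back from the previously matched database frame, so they are always in the database skeleton's scale; only the trajectory part comes from the live tracker. This is a minimal Python sketch with made-up feature dimensions, not the project's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: each frame stores pose features (joint positions,
# velocities in character space) and trajectory features, all in the
# database skeleton's own scale.
db_pose_features = rng.random((1000, 12))  # 1000 frames, 12-dim pose part
db_traj_features = rng.random((1000, 6))   # 6-dim trajectory part

def build_query(current_frame, user_traj_feature):
    # The pose part is copied from the currently playing DB frame, NOT
    # measured on the user's body, so user height never enters it.
    pose_part = db_pose_features[current_frame]
    return np.concatenate([pose_part, user_traj_feature])

def match(query):
    # Nearest-neighbour search over the concatenated feature vectors.
    db = np.concatenate([db_pose_features, db_traj_features], axis=1)
    return int(np.argmin(np.linalg.norm(db - query, axis=1)))
```

In a real system the trajectory part would still need to be made comparable to the database (e.g. via the velocity-based features mentioned above), but the pose part is scale-consistent by construction.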
Hope this answers your questions!
Hi :)
Actually, you were right: the OpenXR runtime was set to SteamVR. However, even after changing it to Oculus, the issue still exists. I assume some of my other configurations are wrong, but I can't manage to fix them (it is not a big issue for me at the moment).
Great, thanks!
Thanks a lot for the insights!
Thanks a lot for the great work! I tried out your project with a Quest 2, and I have some questions I am hoping to get answers to. (By the way, I am not familiar with Unity or any other game engine.)
1) When I press Play in the Unity editor with the Quest 2 connected to the PC (via Link or Air Link), I am not able to see the game unless I open the SteamVR application. Do you have any thoughts on that?
2) Even though I press the B button (which is supposed to calibrate the height), sometimes I notice the avatar is bigger than me. Is there a minimum height setting?
I know that the body orientation prediction module is independent of position (because it uses velocity, etc.) and therefore does not depend on the size of the user. However, pose information is also used in constructing the query vector: is the "Current Pose" normalized according to the height of the user? If not, the motion matching algorithm depends on the size of the user, and I think the motion matching database is created by one user, right? Can you please clarify that for me?
3) Do you have any suggestions on how to incorporate an RGBD camera to further improve the accuracy of the method? Do you think the "Body direction prediction" and "current pose" blocks can be replaced with the pose estimation output of an RGBD camera? (Example: https://www.stereolabs.com/docs/unity/body-tracking/)
Thanks a lot!