Hi,
the Holographic Remoting stack handles all of these details in its internal implementation. We have dedicated code for nearly every feature we remote. Head poses, for example, are one thing we need to treat very carefully to reach the quality level that remoting enables.
So all of that is "just there" for our users. The idea is that remoting is transparent to the application: if you use remoting and then call the normal Windows Mixed Reality or OpenXR APIs to get head poses, view matrices, etc., you just "magically" get the right poses from the remote side. No additional work is needed on your side.
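To make that concrete, here is a minimal sketch of the standard per-frame view query in core OpenXR; under remoting these exact calls return the poses streamed from the player device, with no remoting-specific code. The `session`, `appSpace`, and `frameState` parameters are assumed to come from the usual instance/session setup and `xrWaitFrame`:

```cpp
#include <cstdint>
#include <openxr/openxr.h>

// Standard per-frame view query; identical with or without remoting.
// session, appSpace, and frameState come from the usual OpenXR setup
// and xrWaitFrame. Nothing here is remoting-specific.
void LocateViews(XrSession session, XrSpace appSpace, const XrFrameState& frameState) {
    XrViewLocateInfo viewLocateInfo{XR_TYPE_VIEW_LOCATE_INFO};
    viewLocateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    viewLocateInfo.displayTime = frameState.predictedDisplayTime;
    viewLocateInfo.space = appSpace;

    XrViewState viewState{XR_TYPE_VIEW_STATE};
    XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
    uint32_t viewCount = 0;
    xrLocateViews(session, &viewLocateInfo, &viewState, 2, &viewCount, views);

    // views[i].pose and views[i].fov now hold the head/view data that the
    // remoting runtime produced from poses sent by the player device.
}
```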
I'd recommend taking a look at our samples to see how things are set up correctly and what you need to do (not much, basically ;))
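For reference, the remote-side setup in the samples essentially boils down to enabling the remoting extension and connecting to the player. Below is a rough sketch, assuming the `XR_MSFT_holographic_remoting` preview header that ships with the samples repository; the function pointer is resolved via `xrGetInstanceProcAddr`, the host name is a placeholder, and the exact include path may differ depending on the samples' `openxr_preview` layout:

```cpp
#include <openxr/openxr.h>
// Preview header from the Holographic Remoting samples repo; it declares the
// XR_MSFT_holographic_remoting types and entry points used below.
#include <openxr_msft_holographic_remoting.h>

// Sketch of connecting a remote OpenXR app to the player. Assumes the
// instance was created with "XR_MSFT_holographic_remoting" enabled and
// xrRemotingConnectMSFT was loaded via xrGetInstanceProcAddr.
void ConnectToPlayer(XrInstance instance, XrSystemId systemId,
                     PFN_xrRemotingConnectMSFT xrRemotingConnectMSFT) {
    XrRemotingConnectInfoMSFT connectInfo{
        static_cast<XrStructureType>(XR_TYPE_REMOTING_CONNECT_INFO_MSFT)};
    connectInfo.remoteHostName = "192.168.0.2"; // placeholder: the player device's IP
    connectInfo.remotePort = 8265;              // default Holographic Remoting port
    connectInfo.secureConnection = false;

    // Once this succeeds, the runtime streams rendered frames to the player
    // and feeds poses/input back; the app keeps using plain OpenXR calls.
    xrRemotingConnectMSFT(instance, systemId, &connectInfo);
}
```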
Hi, I'm trying to figure out how data is transmitted between the sample player and the remote OpenXR app. For example, what does the OpenXR runtime use to send the remotely rendered frames? Is there a send queue for that, and how are the frames received on the sample player side? I'd also like to know whether there are default data channels for sending and receiving data such as view and projection transforms after the connection is established, if I do not create custom data channels.
Thank you so much!