Open sahilagnihotri opened 3 years ago
@sahilagnihotri The WinRT APIs for HoloLens 2 research mode will continue to work as they are on HoloLens 2. We don't have plans, nor enough resources, to rewrite every HL2 sample from the past few years against the OpenXR APIs. We believe we've provided enough sample code for developers to learn from and compose together for a wide variety of scenarios. If you'd like to see specific feature coverage for OpenXR usage on HL2, let us know.
The MRTK documentation mentions that OpenXR will replace WinRT in the context of Unity integration, not on the platform itself. MRTK in future versions of Unity 2021 will be OpenXR-only on HoloLens 2.
I was asking in the context of custom 3D engines (they still exist :-), not Unity and Unreal), as we were planning to add HoloLens 2 OpenXR support in our DX11 and DX12 renderers.
@yl-msft I second @sahilagnihotri's request. How would you adapt, e.g., Microsoft's Holographic face tracking sample (https://docs.microsoft.com/samples/microsoft/windows-universal-samples/holographicfacetracking/) using the OpenXR API? I can't seem to find a way to get HoloLens 2 PV camera pixel data using OpenXR, as others were also wondering for other devices (https://community.khronos.org/t/access-to-raw-undistored-image-from-headset-camera/107936). As I understand it, this can only be achieved using proprietary vendor extensions. I'm aware of Microsoft's XR_MSFT_secondary_view_configuration and XR_MSFT_first_person_observer extensions for HoloLens 2. But as outlined in the overview of the latter, "The runtime is responsible for composing the application's rendered observer view onto the camera frame based on the chosen environment blend mode for this view configuration, as this extension does not provide the associated camera frame to the application." Scenarios involving image processing prior to hologram reprojection are numerous. Think about drawing labels as holograms that follow people/objects identified in the surrounding environment, for example. Or does it mean that for such use cases we're stuck with the legacy WinRT API? How would you then interop the HolographicSpace API in the context of an OpenXR UWP app?
@emaschino , @sahilagnihotri ,
There are several different usages of the PV camera on HoloLens 2 that typically get confused; let me try to explain.
A user can enable Mixed Reality Capture (a.k.a. MRC) in the HoloLens shell to take a photo or record a video, for example by saying "record a video" to the HoloLens. This MRC video requires the application to render the holograms from the PV camera's perspective, which is different from the stereo rendering for the left/right eyes. The application's rendering code can leverage the XR_MSFT_secondary_view_configuration and XR_MSFT_first_person_observer extensions to render into this MRC video. This is important for making hand-tracked holograms align with your real hands in the recorded photo/video.
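To make the first use case concrete, here is a minimal sketch (not from the sample) of how an OpenXR app might opt into the first-person observer view. It assumes the XR_MSFT_secondary_view_configuration and XR_MSFT_first_person_observer extensions were already requested at XrInstance creation, and omits all error handling:

```cpp
// Sketch: enabling the MRC first-person observer view at session begin.
XrViewConfigurationType secondaryViewConfigType =
    XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT;

XrSecondaryViewConfigurationSessionBeginInfoMSFT secondaryViewConfigInfo{
    XR_TYPE_SECONDARY_VIEW_CONFIGURATION_SESSION_BEGIN_INFO_MSFT};
secondaryViewConfigInfo.viewConfigurationCount = 1;
secondaryViewConfigInfo.enabledViewConfigurationTypes = &secondaryViewConfigType;

XrSessionBeginInfo beginInfo{XR_TYPE_SESSION_BEGIN_INFO};
beginInfo.primaryViewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
beginInfo.next = &secondaryViewConfigInfo;  // chain the secondary view opt-in
xrBeginSession(session, &beginInfo);
```

Per frame, the app then chains XrSecondaryViewConfigurationFrameStateMSFT into xrWaitFrame and, when the observer view reports active (i.e. the user started MRC), renders that extra view and submits its layers through XrSecondaryViewConfigurationFrameEndInfoMSFT chained to xrEndFrame.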
An application can capture images/videos from the PV camera and process those images to identify objects (e.g. user faces) in the real world. This is what the holographicfacetracking sample does. For this task, the application should continue to use the MediaCapture API to start video/photo capture, or the MediaFrameReader API to get images for computer vision tasks. This part of the HoloLens 2 API surface will continue to be supported by the same media capture APIs that all web cameras on Windows use. This is not an area the OpenXR API is trying to change.
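For reference, the MediaCapture/MediaFrameReader flow described above might look roughly like this in C++/WinRT (a hedged sketch, not the sample's code: error handling is omitted and the color-source selection is simplified; on HL2 the first color source is assumed to be the PV camera):

```cpp
// Sketch (C++/WinRT): reading PV camera frames for computer vision work.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Capture.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt::Windows::Media::Capture;
using namespace winrt::Windows::Media::Capture::Frames;

winrt::Windows::Foundation::IAsyncAction StartPvCaptureAsync()
{
    MediaCapture mediaCapture;
    MediaCaptureInitializationSettings settings;
    settings.StreamingCaptureMode(StreamingCaptureMode::Video);
    settings.MemoryPreference(MediaCaptureMemoryPreference::Cpu); // CPU buffers for CV
    co_await mediaCapture.InitializeAsync(settings);

    // Pick the first color frame source (assumed to be the PV camera on HL2).
    MediaFrameSource colorSource{ nullptr };
    for (auto&& pair : mediaCapture.FrameSources())
    {
        auto source = pair.Value();
        if (source.Info().SourceKind() == MediaFrameSourceKind::Color)
        {
            colorSource = source;
            break;
        }
    }

    MediaFrameReader reader = co_await mediaCapture.CreateFrameReaderAsync(colorSource);
    reader.FrameArrived([](MediaFrameReader const& sender, MediaFrameArrivedEventArgs const&)
    {
        if (MediaFrameReference frame = sender.TryAcquireLatestFrame())
        {
            // frame.VideoMediaFrame().SoftwareBitmap() holds the pixels
            // to feed into your face/object detection.
        }
    });
    co_await reader.StartAsync();
}
```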
For the "raw image from headset camera" discussion you linked above, most people there are looking for images from the pass-through cameras on VR devices, which is not applicable to HL2. There are also developers looking for the head-tracking cameras, which are not available through the OpenXR platform either. But from what I'm reading, you are not looking into these edge cases.
I also understand the old holographicfacetracking sample code uses C++/CX, which can be hard to bring into a game engine. You will need to convert the sample's MediaCapture logic to C++/WinRT. Unfortunately, our team doesn't have the resources to make that conversion for you.
@yl-msft Thank you for clarifying the various usages. One thing that still puzzles me in the second use case is how to access the frame data, such as the PV camera pose. Once you've got a MediaFrameReference, what's the difference between getting the associated camera pose with MediaFrameReference::CoordinateSystem() followed by SpatialCoordinateSystem::TryGetTransformTo(), and the approach you described in #88? As you outlined there, "We've not publish the sample code on this yet, but here are some code snippet for you to get started. We will clean up our internal sample and publish here to github soon." I can't find any update on the subject, so I don't know whether the proposed solution is still relevant. As also emphasized in this SO question, documentation and samples for the OpenXR XR_MSFT_spatial_graph_bridge extension are a bit lacking.
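For anyone landing here: the first of the two routes mentioned above (the pure WinRT one) can be sketched like this. `worldCoordinateSystem` is an assumed name for whatever reference SpatialCoordinateSystem your app uses (e.g. from a stationary frame of reference); the call can fail per-frame if tracking was lost:

```cpp
// Sketch (C++/WinRT): recovering the camera-to-world transform for a PV frame.
#include <optional>
#include <winrt/Windows.Media.Capture.Frames.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.Foundation.Numerics.h>

using namespace winrt::Windows::Media::Capture::Frames;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::Foundation::Numerics;

std::optional<float4x4> TryGetCameraToWorld(
    MediaFrameReference const& frame,
    SpatialCoordinateSystem const& worldCoordinateSystem)
{
    // The frame carries the camera's coordinate system at capture time.
    SpatialCoordinateSystem cameraCoordinateSystem = frame.CoordinateSystem();

    // Returns nullptr if no transform is available for this frame.
    auto cameraToWorld = cameraCoordinateSystem.TryGetTransformTo(worldCoordinateSystem);
    if (!cameraToWorld)
    {
        return std::nullopt;
    }
    return cameraToWorld.Value();
}
```

The XR_MSFT_spatial_graph_bridge route instead creates an XrSpace from a spatial graph node GUID (via xrCreateSpatialGraphNodeSpaceMSFT), so the camera pose can be located directly against other OpenXR spaces without going through WinRT coordinate systems; which is preferable depends on whether the rest of your engine's spatial math lives in OpenXR or WinRT types.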
Hi! Is there any plan to port the MixedRealityToolkit and OpenCV code samples here: https://github.com/microsoft/HoloLens2ForCV/tree/main/Samples and https://github.com/microsoft/MixedRealityToolkit to work with OpenXR, since the documentation says WinRT is now legacy?