anjelomerte closed this issue 6 months ago
Hello,
1) PV+AHAT depth+Spatial Input+Audio should run smoothly. Video streams are the most demanding and in our tests, PV+AHAT works OK.
2) I think it should work, but I have not tested the plugin in Unity 2022, so I am not sure. If there are any issues, hopefully they will only require some small changes to hl2ss.cs.
3) Yes, Windows is supported.
4) Streaming is controlled by the client; see the example Python scripts. We use client.open(), client.get_next_packet(), and client.close() to start the stream, acquire frames, and stop streaming. For our project, we run a Unity application (with the plugin) on the HoloLens, and then we run our Python code on a desktop PC to capture and process the HoloLens data.
5) To get 2D gaze coordinates, we raycast against the Spatial Mapping mesh to obtain the ray length, as the StreamRecorder sample from HoloLens2ForCV does.
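To illustrate the idea: the eye-gaze ray plus the raycast hit distance give a 3D point, which a pinhole camera model then projects into 2D image coordinates. The function names, matrices, and numbers below are our own illustration, not hl2ss's code:

```python
import numpy as np

def gaze_point(origin, direction, hit_distance):
    """3D gaze point = ray origin + hit distance * unit direction."""
    return np.asarray(origin) + hit_distance * np.asarray(direction)

def project_to_pixel(point_world, world_to_camera, K):
    """Project a world-space point to pixel (u, v) coordinates.

    world_to_camera: 3x4 extrinsics matrix, K: 3x3 pinhole intrinsics.
    """
    p = world_to_camera @ np.append(point_world, 1.0)  # to camera frame
    u = K[0, 0] * p[0] / p[2] + K[0, 2]                # perspective divide
    v = K[1, 1] * p[1] / p[2] + K[1, 2]
    return u, v

# Example: gaze straight ahead, mesh hit 2 m away, camera at the origin
# (intrinsics values are made up for the demonstration).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
p3d = gaze_point([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 2.0)
u, v = project_to_pixel(p3d, np.eye(4)[:3], K)
print(u, v)  # → 320.0 240.0 (the principal point, as expected)
```

In practice the extrinsics and intrinsics come from the device's camera calibration, and the ray is raycast against the Spatial Mapping mesh rather than given a fixed distance.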
Thank you for your input, will try it out then!
Hi jdibenes! Thank you for making your work open source. It helps the research community very much!
Personally, I am currently capturing sensor data and storing it locally on the HL2. I would love to test out your implementation. Before I do, I just have a few questions to clarify whether I understand the capabilities of your implementation correctly, bearing in mind that I would like to access multiple sensor streams at once, such as front camera footage (+audio), depth data, and hand/head/eye tracking data:
Thank you very much for your work again! Looking forward to your feedback! Best regards