microsoft / HoloLens2ForCV

Sample code and documentation for using the Microsoft HoloLens 2 for Computer Vision research.
MIT License

How to get aligned depth? #50

Open mianbao96 opened 3 years ago

mianbao96 commented 3 years ago

Hello, I would like to ask a question about how to get aligned depth. I want to capture RGB-D data with the HoloLens 2; the sensor streams I use are PV + Depth Long Throw or VLC LF + Depth Long Throw. Since I don't know the one-to-one mapping between pixels in the depth image and pixels in the RGB or grayscale image, I can only use the sensor poses to find sparse correspondences through backprojection and reprojection. How can I directly get a dense aligned depth image? Or do you plan to release the pixel mapping table between the depth image and the RGB (or grayscale) images?
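For context, a minimal sketch of the backprojection/reprojection step described here could look as follows. It assumes the kind of data the StreamRecorder sample saves (a depth-camera unprojection look-up table, per-frame camera-to-world transforms, and PV pinhole intrinsics); the function and argument names, and the sign conventions, are illustrative assumptions rather than an API from this repository.

```python
import numpy as np

def depth_pixel_to_pv(u, v, depth_image, lut, depth_cam2world, world2pv_cam, pv_K):
    """Map one Long Throw depth pixel (u, v) to PV image coordinates.

    lut             : (H*W, 3) unprojection table of unit rays in the depth camera frame
    depth_cam2world : 4x4 depth-camera-to-world transform for this depth frame
    world2pv_cam    : 4x4 world-to-PV-camera transform for the matching PV frame
    pv_K            : 3x3 PV pinhole intrinsics
    """
    h, w = depth_image.shape
    d = depth_image[v, u] / 1000.0            # depth assumed to be stored in millimeters
    if d == 0:
        return None                           # invalid depth sample
    ray = lut[v * w + u]                      # unit ray through this depth pixel
    pt_cam = ray * d                          # backproject into the depth camera frame
    pt_world = depth_cam2world @ np.append(pt_cam, 1.0)
    pt_pv = world2pv_cam @ pt_world           # move into the PV camera frame
    x, y, z = pt_pv[:3]
    if z <= 0:
        return None                           # point is behind the PV camera
    # Reproject with the PV intrinsics; depending on how the transforms were
    # exported, the Research Mode frame conventions may require an axis flip here.
    u_pv = pv_K[0, 0] * x / z + pv_K[0, 2]
    v_pv = pv_K[1, 1] * y / z + pv_K[1, 2]
    return u_pv, v_pv
```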

fbogo commented 3 years ago

Hello, apologies, but I'm not sure I understand what "dense aligned depth image" means. Are any of the Python scripts provided here useful for you?

mianbao96 commented 3 years ago

@fbogo Thank you for your reply; I'm sorry I did not express it very clearly. What I actually want is an RGB-Depth or Gray-Depth image pair at the same resolution with a one-to-one pixel correspondence, so that for a given pixel coordinate (u,v) I can read both the color value RGB(u,v) and the depth value Depth(u,v). If there is no interface for this, can I at least get the pixel coordinate mapping between the RGB (or Gray) image and the depth image, so that for a given pixel (u,v) in the RGB or Gray image I can obtain the corresponding pixel (u',v') in the depth image?
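One way to approximate such a one-to-one pair, if no mapping table is exposed, is to project every valid depth pixel into the PV frame and sample the color there, which yields a color image registered to the depth resolution. The sketch below is a vectorized variant of the per-pixel example above, under the same assumptions about the unprojection LUT, per-frame transforms, and PV intrinsics.

```python
import numpy as np

def pv_color_aligned_to_depth(depth_image, pv_image, lut,
                              depth_cam2world, world2pv_cam, pv_K):
    """Return an (H, W, 3) color image at depth resolution, so that
    rgb_aligned[v, u] and depth_image[v, u] refer to the same 3D point."""
    h, w = depth_image.shape
    d = depth_image.reshape(-1).astype(np.float32) / 1000.0   # assumed mm -> m
    pts = lut * d[:, None]                                     # backproject all depth pixels
    pts_h = np.concatenate([pts, np.ones((h * w, 1))], axis=1)
    pts_pv = (world2pv_cam @ depth_cam2world @ pts_h.T).T[:, :3]
    z = pts_pv[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u_pv = np.round(pv_K[0, 0] * pts_pv[:, 0] / z + pv_K[0, 2])
        v_pv = np.round(pv_K[1, 1] * pts_pv[:, 1] / z + pv_K[1, 2])
    # Keep only pixels with valid depth that land inside the PV image.
    valid = (d > 0) & (z > 0) & \
            (u_pv >= 0) & (u_pv < pv_image.shape[1]) & \
            (v_pv >= 0) & (v_pv < pv_image.shape[0])
    rgb_aligned = np.zeros((h * w, 3), dtype=pv_image.dtype)
    rgb_aligned[valid] = pv_image[v_pv[valid].astype(int), u_pv[valid].astype(int)]
    return rgb_aligned.reshape(h, w, 3)
```

The same u_pv/v_pv arrays also give a dense depth-to-PV coordinate map; going the other way (PV pixel to depth pixel) additionally needs a z-buffer, since several depth pixels can project onto the same PV pixel.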

mianbao96 commented 3 years ago

@fbogo I'm really sorry to bother you again. Please reply to my questions when you are free. Thank you very much.

fbogo commented 3 years ago

Would the Python code here, https://github.com/microsoft/HoloLens2ForCV/blob/main/Samples/StreamRecorder/StreamRecorderConverter/save_pclouds.py (from line 132 on), help with your task? It uses a virtual pinhole camera model.
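The idea behind that approach, as I read it, is to project the colored point cloud into a distortion-free virtual pinhole camera, which produces a depth image and a color image that are aligned by construction. A rough, repository-independent sketch of that idea (the function and argument names here are made up, not taken from save_pclouds.py):

```python
import numpy as np

def render_virtual_pinhole(points_world, colors, world2cam, K, height, width):
    """Render a colored point cloud into a virtual pinhole camera,
    producing a depth image and a color image of the same size."""
    n = len(points_world)
    pts_h = np.concatenate([points_world, np.ones((n, 1))], axis=1)
    pts = (world2cam @ pts_h.T).T[:, :3]
    z = pts[:, 2]
    keep = z > 0
    pts, z, cols = pts[keep], z[keep], colors[keep]
    u = np.round(K[0, 0] * pts[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[:, 1] / z + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, cols = u[inside], v[inside], z[inside], cols[inside]
    depth = np.full((height, width), np.inf)
    color = np.zeros((height, width, 3), dtype=cols.dtype)
    # Write points from far to near so the nearest point ends up in each pixel
    # (a simple z-buffer via write order).
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    color[v[order], u[order]] = cols[order]
    depth[np.isinf(depth)] = 0.0               # pixels never hit get depth 0
    return depth, color
```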

ishanic commented 3 years ago

Reaching out on this thread since I believe I have a similar issue. I have an (x,y) coordinate in the PV image, and I am looking for the corresponding coordinate in the depth image. Is there a PV-to-depth transform that could be applied directly to obtain this correspondence? A lookup-table kind of framework would be very useful. The pinhole projection does indeed provide aligned maps, but in a distorted space.
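Regarding the distorted space: one option is to move the PV coordinates (or the whole PV image) into the same ideal pinhole space that the projection code works in, for example with OpenCV's undistortion functions, and do the lookup there. The intrinsics and distortion coefficients below are placeholder values for illustration only, not calibration data from the device.

```python
import cv2
import numpy as np

# Placeholder PV intrinsics / distortion for a 1920x1080 PV frame (made-up numbers).
K = np.array([[1450.0,    0.0, 960.0],
              [   0.0, 1450.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([0.01, -0.02, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

# Single point: map a distorted PV pixel into the ideal pinhole space,
# keeping the same intrinsics for the output (P=K).
uv_distorted = np.array([[[1000.0, 500.0]]], dtype=np.float32)
uv_ideal = cv2.undistortPoints(uv_distorted, K, dist, P=K)[0, 0]

# Dense version: remap tables computed once, applied per PV frame, so the
# undistorted PV image lives in the same space as the projected depth map.
map_x, map_y = cv2.initUndistortRectifyMap(K, dist, None, K, (1920, 1080), cv2.CV_32FC1)
pv_image = cv2.imread("pv_frame.png")            # hypothetical PV frame on disk
pv_ideal = cv2.remap(pv_image, map_x, map_y, cv2.INTER_LINEAR)
```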

michalt38 commented 1 year ago

@ishanic @mianbao96 Did you manage to solve the problem?

ccrop commented 1 year ago

I'm also interested in this requirement.