doughtmw / HoloLens2-Machine-Learning

Using deep learning models for image classification directly on the HoloLens 2.
MIT License

How to unproject a 2D point in a captured image back to its accurate location in the physical world? #16

Closed NTUZZH closed 1 year ago

NTUZZH commented 1 year ago

Hi all,

I'm creating HoloLens 2 software that detects corners in an image and then shows their actual locations in the physical world. However, there are slight deviations between the projected points and their actual locations, and the deviations increase with distance from the user, as the following figure shows.

[image: detected corners projected into the scene, drifting from their physical locations with distance]

Here's my code snippet for projecting the detected corners: I convert the corners from image coordinates to screen coordinates, then to NDC for unprojection. Next, I use the inverse of the projection matrix to transform the points from NDC to camera space, and then from camera space to world space using the camera-to-world matrix. Finally, I instantiate the detected points using a Ray and RaycastHit.

[image: screenshot of the projection code snippet]
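
In case the screenshot is not visible, this is roughly what the snippet does in text form (variable and prefab names are placeholders, not the exact code):

```csharp
using UnityEngine;

public class CornerProjector : MonoBehaviour
{
    public GameObject markerPrefab;  // marker instantiated at each detected corner

    // cornerPixel is a detected corner in image (pixel) coordinates.
    public void PlaceCorner(Vector2 cornerPixel, int imageWidth, int imageHeight,
                            Matrix4x4 projectionMatrix, Matrix4x4 cameraToWorldMatrix)
    {
        // Image -> screen -> NDC in [-1, 1] (y flipped so +y points up).
        float ndcX = 2.0f * (cornerPixel.x / imageWidth) - 1.0f;
        float ndcY = 1.0f - 2.0f * (cornerPixel.y / imageHeight);

        // NDC -> camera space via the inverse projection matrix (a point on the view ray).
        Vector3 cameraSpacePoint = projectionMatrix.inverse.MultiplyPoint(new Vector3(ndcX, ndcY, 1.0f));

        // Camera space -> world space via the camera-to-world matrix.
        Vector3 worldPoint = cameraToWorldMatrix.MultiplyPoint(cameraSpacePoint);
        Vector3 cameraPosition = cameraToWorldMatrix.MultiplyPoint(Vector3.zero);

        // Cast a ray from the camera through the unprojected point; the corner's depth
        // comes from where the ray hits the spatial mesh.
        Ray ray = new Ray(cameraPosition, (worldPoint - cameraPosition).normalized);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            Instantiate(markerPrefab, hit.point, Quaternion.identity);
        }
    }
}
```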

For getting the cameraToWorldMatrix and projectionMatrix, I referred to the XRIV work.
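
For context, one common way to obtain these two matrices in Unity is from the PhotoCaptureFrame of each captured photo; the XRIV code may obtain them through a different path, so this is only an illustration:

```csharp
using UnityEngine;
using UnityEngine.Windows.WebCam;

public class FrameMatrices : MonoBehaviour
{
    // Callback passed to PhotoCapture.TakePhotoAsync elsewhere in the capture setup.
    void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame frame)
    {
        if (frame.TryGetCameraToWorldMatrix(out Matrix4x4 cameraToWorldMatrix) &&
            frame.TryGetProjectionMatrix(out Matrix4x4 projectionMatrix))
        {
            // Both matrices describe the camera pose at the moment this frame was captured,
            // so they must be paired with the corners detected in this same frame.
        }
    }
}
```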

My guess is that either the depth information is not estimated properly or the values of cameraToWorldMatrix and projectionMatrix are not correct. However, I have investigated both possibilities without success.

Could anyone give me some insight into how to mitigate this projection deviation? I would greatly appreciate any help!