microsoft / HoloLensForCV

Sample code and documentation for using the Microsoft HoloLens for Computer Vision research
MIT License

Short depth camera's focal length? #63

Open SongYx1995 opened 6 years ago

SongYx1995 commented 6 years ago

Hi all, I can get the depth image from the HoloLens, but I need to convert it to a 3D point cloud for other purposes, so I want to get focal_x, focal_y, u0, and v0.

Can anyone tell me the values of those parameters, or any method to obtain them?

FracturedShader commented 5 years ago

That data is, unfortunately, not available. You have to use MapImagePointToCameraUnitPlane for each depth pixel to get the XY direction that the pixel would be projected along (Z is always 1, since it's projecting onto the unit plane). Plug those values into a 3D vector to get (x, y, 1), and then normalize it. From there, multiply the depth value by the depth scalar, and multiply that by the matching unit vector to create a point. Through testing, I found that discarding depth values above 0xFF0 reliably cut off points that didn't capture. Note that this point cloud will only be in camera capture space, not world space; correlating the different captures and transforming them all is a harrowing process.
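
To make that recipe concrete, here is a minimal Python sketch of the same math; the unit_x and unit_y arrays are assumed to hold the per-pixel unit-plane values obtained via MapImagePointToCameraUnitPlane (or from a recorded projection file), and the depth scalar shown is an assumed millimetre-to-metre factor, not a confirmed constant:

import numpy as np

def depth_to_points(depth, unit_x, unit_y, depth_scale=0.001, invalid_above=0xFF0):
    # depth: (H, W) raw depth image; unit_x, unit_y: (H, W) unit-plane values.
    valid = depth <= invalid_above

    # Build the (x, y, 1) direction for every pixel and normalize it.
    rays = np.stack([unit_x, unit_y, np.ones_like(unit_x)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Scale each unit ray by its measured distance. depth_scale is an assumed
    # millimetre-to-metre factor; substitute the sensor's actual depth scalar.
    return rays[valid] * (depth[valid] * depth_scale)[:, None]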

Huangying-Zhan commented 5 years ago

Hi @AlexSyx, @FracturedShader has provided a good guideline for getting the 3D point cloud. Basically, you need to use the unprojection mapping. To make it more specific, I will walk through a long_throw_depth example of getting the 3D point cloud. After saving the recording, you will get the following data.

The binary file basically stores the (u, v) of the unit plane for each pixel. If you want to get a 3D point, it is [X, Y, Z] = Z * [u, v, 1]. Now I am going to explain how to read this (u, v).

Suppose we have an image where the pixel coordinate of the top-left corner is [y, x] = [0, 0]. The binary file stores the unprojection mapping (datatype: float32) for each pixel in column-major order:

u{0,0}, v{0,0}, u{1,0}, v{1,0}, ..., u{H-1,0}, v{H-1,0},
u{0,1}, v{0,1}, ..., u{H-1,1}, v{H-1,1},
...,
u{0,W-1}, v{0,W-1}, ..., u{H-1,W-1}, v{H-1,W-1}

You can refer to: https://github.com/Microsoft/HoloLensForCV/blob/87c5eeb436ae909894a8049cb2584e60dcad13b0/Shared/HoloLensForCV/SensorFrameRecorder.cpp#L243

Knowing the way the unprojection mapping is saved, we can now read the (u, v) mapping. Here is Python sample code that you can use to read it.

import numpy as np

def get_cam_space_projection(projection_bin, depth_h, depth_w):
    # Read the interleaved (u, v) float32 pairs from the binary file.
    projection = np.fromfile(projection_bin, dtype=np.float32)
    x_list = projection[0::2]  # u components
    y_list = projection[1::2]  # v components

    # The file is written column by column, so reshape to (W, H) and transpose.
    u = x_list.reshape(depth_w, depth_h).T
    v = y_list.reshape(depth_w, depth_h).T

    return [u, v]
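
For example, for a 450 x 448 (H x W) long throw depth frame it might be called like this (the projection file name is illustrative; use whatever your recording actually contains):

u, v = get_cam_space_projection('long_throw_depth_camera_space_projection.bin', 450, 448)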

Here is how u and v look. [figures: visualizations of u and v] Note that yellow indicates positive values, blue indicates negative values, and the white area is the invalid region.

Now, we can try to get 3D points. First, please note the coordinate system used in research mode: [figure: research mode coordinate system]

Suppose we already have a depth map Z (negative values); we can then get the 3D points as [Z*u, Z*v, Z] in the coordinate system described above. That is the basic recipe for getting 3D points.

(Optional) Depth map issues: the long_throw_depth obtained from the recorder is actually not the Z value but the distance D along the ray, so a simple conversion is required: Z = D / sqrt(u^2 + v^2 + 1). To get correct 3D points in the coordinate system above, remember to negate Z.
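
Putting the pieces together, here is a minimal Python sketch of the whole recipe, assuming the recorded depth is in millimetres:

import numpy as np

def long_throw_depth_to_points(D, u, v):
    # D: (H, W) long throw depth (distance along the ray, assumed in mm);
    # u, v: (H, W) unprojection mapping from get_cam_space_projection above.
    Z = D / np.sqrt(u ** 2 + v ** 2 + 1)  # distance -> Z
    Z = -Z / 1000.0                       # negative Z in this frame; mm -> m
    return np.stack([Z * u, Z * v, Z], axis=-1).reshape(-1, 3)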

Hope this helps!

maurosyl commented 5 years ago

Update: fixed the problem. It was MATLAB rounding my u, v matrices' values. I am still curious about the intrinsic parameters; if I wanted to retrieve them, could I just use the 2D-3D correspondences I found?

Hi everyone, I am trying to recover the intrinsic parameters of the depth sensor, and I have been following the instructions in @Huangying-Zhan's answer. I used the suggested Python code to recover the u and v matrices and then wrote a small MATLAB function to get the point cloud from the depth frame. The problem is that the point cloud I get doesn't resemble the original scene in the frame at all:

[figure: the depth frame (depthjpg)]

[figure: the resulting point cloud (pcloud); blue is near, yellow is far]

Here is the MATLAB function; did I do something wrong?

function [points_list] = uv2pointscloud(u_mat, v_mat, Dframe)

% Discard invalid depth readings (raw values above 64000).
for i = 1 : 450
    for j = 1 : 448
        if Dframe(i,j) > 64000
            effDframe(i,j) = 0;
        else
            effDframe(i,j) = Dframe(i,j);
        end
    end
end

% Convert distance along the ray to Z: Z = D / sqrt(u^2 + v^2 + 1).
for i = 1 : 450
    for j = 1 : 448
        Unscaled_effZframe(i,j) = effDframe(i,j) / sqrt(u_mat(i,j)^2 + v_mat(i,j)^2 + 1);
    end
end

% Millimetres to metres.
effZframe = Unscaled_effZframe / 1000;

% Back-project every valid pixel to a 3D point [Z*u, Z*v, Z].
k = 1;
tic
for i = 1 : 450
    for j = 1 : 448
        if effZframe(i,j) ~= 0
            points_list(k, 1) = u_mat(i,j) * effZframe(i,j);
            points_list(k, 2) = v_mat(i,j) * effZframe(i,j);
            points_list(k, 3) = effZframe(i,j);
            k = k + 1;
        end
    end
end
toc

end

Also, I understand from your answers that the only way to get the intrinsic parameters of the camera is to use u and v to find the 3D points and then compute the intrinsics matrix from the 3D-2D correspondence. Is that right?
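
To illustrate what I mean: if the mapping is well approximated by an undistorted pinhole model, then u ≈ (x_pix - u0) / fx along each row (and similarly for v along each column), so the intrinsics could be recovered with a simple least-squares line fit instead of a full 2D-3D calibration. A rough sketch, valid under that assumption only:

import numpy as np

def fit_pinhole_intrinsics(u, v):
    # u, v: (H, W) unit-plane mapping. Assumes u = (x_pix - u0) / fx and
    # v = (y_pix - v0) / fy, i.e. a pinhole model with no distortion.
    h, w = u.shape
    xs = np.broadcast_to(np.arange(w, dtype=np.float64), (h, w))
    ys = np.broadcast_to(np.arange(h, dtype=np.float64)[:, None], (h, w))
    valid = np.isfinite(u) & np.isfinite(v)

    # Fit u = a*x + b, so fx = 1/a and u0 = -b/a (similarly for v).
    a_u, b_u = np.polyfit(xs[valid], u[valid], 1)
    a_v, b_v = np.polyfit(ys[valid], v[valid], 1)
    return 1.0 / a_u, 1.0 / a_v, -b_u / a_u, -b_v / a_v  # fx, fy, u0, v0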

zwz14 commented 5 years ago

If you are using short throw depth data, you should treat values in the range of 200 to 1000 as valid. After you mask out the invalid ones, you should be able to get a correct 3D point cloud of the near scene, such as your hand.
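
A minimal NumPy sketch of that masking (the 200-1000 window is the short throw range suggested above):

import numpy as np

def mask_short_throw(depth, lo=200, hi=1000):
    # Zero out readings outside the suggested valid range for short throw.
    valid = (depth > lo) & (depth < hi)
    return np.where(valid, depth, 0)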

streamwill commented 5 years ago

Hello, I built the [Recorder] project of HoloLensForCV to get the depth sensor data, but the resulting .CSV file and .TAR package are empty. [screenshots 1 and 2] Does anyone know what is going on? Thanks very much! By the way, the [SensorStreamViewer] project is working. [screenshot: snipaste_2018-12-01_16-44-06]

ahojnnes commented 5 years ago

@Liebewill please upgrade to the latest Windows version on the HoloLens and check out the latest commits of this repository. There was an incompatibility between the latest update and this repository's usage of the API. It should be fixed now.

alemarro commented 5 years ago

Hi all, my question is: does read_sensor_poses, implemented in recorder_console.py (https://github.com/Microsoft/HoloLensForCV/blob/master/Samples/py/recorder_console.py), actually give the absolute camera pose, i.e., the transformation from the world to the camera coordinate system?

I already have the point cloud in the frame coordinate system; I got it by using the (.bin) projection * (-depth) / 1000. Now I am trying to multiply the 3D points by the inverse of the poses from that code (read_sensor_poses), but the point clouds do not align. What am I doing wrong? Thanks
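
For reference, this is roughly the transform I am applying, assuming read_sensor_poses yields 4x4 matrices; if the matrices are already camera-to-world, the inverse here would be the bug:

import numpy as np

def to_world(points_cam, pose):
    # points_cam: (N, 3) points in the frame coordinate system;
    # pose: 4x4 matrix from read_sensor_poses (convention unconfirmed).
    p_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (np.linalg.inv(pose) @ p_h.T).T[:, :3]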

vitcozzolino commented 5 years ago

@mauronano How did you solve the problem with the messed-up point cloud? I'm having the same issue, but I don't think it's a rounding problem, at least in my case. Do you have any hints? I can post the code if necessary.

Edit: I think I found out what's happening. In my original picture there are a lot of reflective surfaces (like two monitors and a whiteboard), which may have messed up the depth map. I'm just guessing, as I'm a beginner in this field.

FracturedShader commented 5 years ago

@vitcozzolino, you are correct. In addition to reflective surfaces, anything with a black material/coating does not capture well either (as it absorbs infrared).

cyrineee commented 5 years ago

@mauronano how did you get the depth from long_throw_depth.csv? There are too many fields in the CSV file (which one?)

maurosyl commented 5 years ago

@cyrineee you don't get the depth from the .csv file. If you run the recording app, you get the depth data in the form of grayscale images arranged in folders like in the picture; every pixel of these images is a measure of the distance to some "obstacle" in the depth camera's field of view. Each depth map (i.e., each grayscale image) comes with a timestamp, which you can then look up in the .csv file to get additional information about each particular frame (like the orientation of the camera when the frame was shot and some other camera parameters). [screenshot: Recording folder structure]
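
If it helps, the timestamp lookup can be done with a few lines of Python; note the 'Timestamp' column name is an assumption, so match it to the actual header in your recording's .csv:

import csv

def frame_info(csv_path, timestamp):
    # Return the CSV row whose timestamp matches a depth image's file name.
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            if row['Timestamp'] == timestamp:  # assumed column name
                return row
    return None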

cyrineee commented 5 years ago

@mauronano Thanks a lot for your explanations! So I should download the files from the recorder (via the Windows Device Portal file explorer), then use the .pgm or .ppm images to get the distance? Should I use this script for the depth and the distance?: https://github.com/Microsoft/HoloLensForCV/commit/1ff2d0dd3dd6063165cae593408022a345498872#diff-104676d8e0f74131a6b3e4a7352c4bcb

Thanks in advance!