NVlabs / intrinsic3d

Intrinsic3D - High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting (ICCV 2017)
https://vision.in.tum.de/_media/spezial/bib/maier2017intrinsic3d.pdf
BSD 3-Clause "New" or "Revised" License

Depth to color alignment. #8

Closed abuzaina closed 4 years ago

abuzaina commented 4 years ago

Hi, I am using Intrinsic3D on my own dataset, but my depth maps and RGB images are not aligned, i.e. there is a 4x4 transformation matrix between the depth sensor and the color camera. My question is: how do I take this into account?

robmaier commented 4 years ago

Hi, the simplest way would probably be to pre-align/register the depth maps. Roughly said, implement something similar to the following steps:

  • back-project each depth pixel into a 3D point using the depth camera intrinsics.
  • warp/transform these 3D points into color camera frame using extrinsic 4x4 transformation.
  • project the warped 3D points into the color image using the color camera intrinsics and write their depths into a new, registered depth map.
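This pre-registration could be sketched as follows (a minimal sketch, not the Intrinsic3D code; all function and parameter names, and the use of NumPy, are assumptions for illustration):

```python
import numpy as np

def register_depth_to_color(depth, K_d, K_c, T_d2c, color_shape):
    """Warp a depth map into the color camera frame.

    depth:       HxW depth map in meters (0 = invalid)
    K_d, K_c:    3x3 intrinsics of depth and color camera
    T_d2c:       4x4 extrinsic transformation, depth frame -> color frame
    color_shape: (height, width) of the color image
    """
    h, w = depth.shape
    registered = np.zeros(color_shape, dtype=depth.dtype)
    fx_d, fy_d = K_d[0, 0], K_d[1, 1]
    cx_d, cy_d = K_d[0, 2], K_d[1, 2]
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue
            # 1) back-project depth pixel to a 3D point (depth camera frame)
            p = np.array([(u - cx_d) * z / fx_d, (v - cy_d) * z / fy_d, z, 1.0])
            # 2) warp the 3D point into the color camera frame
            pc = T_d2c @ p
            if pc[2] <= 0:
                continue
            # 3) project into the color image with the color intrinsics
            uv = K_c @ pc[:3]
            uc = int(round(uv[0] / uv[2]))
            vc = int(round(uv[1] / uv[2]))
            if 0 <= uc < color_shape[1] and 0 <= vc < color_shape[0]:
                # keep the closest depth if several points hit one pixel
                if registered[vc, uc] == 0 or pc[2] < registered[vc, uc]:
                    registered[vc, uc] = pc[2]
    return registered
```

With identical intrinsics and an identity extrinsic this is a no-op, which matches the "pre-registered" case discussed below; a real pipeline would also handle occlusions and hole filling more carefully.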

MaxChanger commented 4 years ago

Hello @robmaier and @abuzaina, I follow your discussion, but for the dataset provided in datasets/intrinsic3d, such as tomb-statuary-rgbd.zip, I did not find the extrinsic 4x4 transformation parameters between the depth and color cameras. Where can I find these transformation parameters? Thanks.

robmaier commented 4 years ago

Hi @MaxChanger, the answer is simple: there are no static extrinsic 4x4 transformation parameters between depth and color provided; the previous comment explains how to pre-register depth to color. The color camera poses are stored in the frame-XXXXXX.pose.txt files in the Intrinsic3D dataset sequences.

MaxChanger commented 4 years ago

@robmaier Thank you for your prompt reply.

As mentioned in the second item above,

  • warp/transform these 3D points into color camera frame using extrinsic 4x4 transformation.

I think the 3D points are transformed into the color frame in order to get the RGB information for the point cloud, but frame-XXXXXX.pose.txt stores the transformation between the color camera and the world coordinate frame, rather than the transformation between the color and depth cameras. So I have no idea how to get the color information for the point cloud.

robmaier commented 4 years ago

The warping procedure only applies if you work with your own data; in that case you need to manually preprocess your RGB-D frames so they are aligned/pre-registered, and of course you need to know your own extrinsic calibration beforehand. I can basically only say: Intrinsic3D reads datasets in its own format, and you have to take care of converting your data into the Intrinsic3D format yourself.

MaxChanger commented 4 years ago

What I want to do is use your dataset to test some other reconstruction algorithms. The data I collected myself with a Kinect has been properly processed, because I know the relative pose between the color and depth cameras. Without the extrinsic parameters between the color and depth cameras, I don't think it is possible to get a colored point cloud. In that case, I'll give up the idea. Thank you for the discussion and support.

robmaier commented 4 years ago

Well, pre-registered here means that the extrinsic transformation between color and depth is the identity matrix (although this is not perfectly true in practice, due to the imperfect factory calibration between depth and color). I.e., a depth pixel and its 3D point can be looked up at the same pixel coordinates in the color image; it's just that simple.
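In code, the identity-extrinsic case boils down to a same-pixel lookup (a toy sketch with hypothetical values, not from the Intrinsic3D dataset):

```python
import numpy as np

# Toy pre-registered RGB-D pair (all values hypothetical). With an identity
# depth-to-color extrinsic, the color of a depth pixel is read at the same
# pixel coordinates in the color image -- no warping needed.
depth = np.zeros((4, 4), dtype=np.float32)   # depth in meters
color = np.zeros((4, 4, 3), dtype=np.uint8)  # RGB image of the same size
depth[1, 2] = 1.5
color[1, 2] = (200, 100, 50)

v, u = 1, 2
z = depth[v, u]    # depth of the 3D point at pixel (u, v)
rgb = color[v, u]  # its color: same-pixel lookup under an identity extrinsic
```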

MaxChanger commented 4 years ago

I see what you mean: pre-registered means that the color and depth images are already aligned, so I can use them directly. Thank you.