xiexh20 / behave-dataset

Code to access BEHAVE dataset, CVPR'22
https://virtualhumans.mpi-inf.mpg.de/behave/

Alignment of color and depth #36

[Open] XuM007 opened this issue 2 months ago

XuM007 commented 2 months ago

Thank you very much for releasing the method used to build the dataset, but I'm having some problems making data myself. I take the color and depth images from the BEHAVE data and the color-camera intrinsics from the calibration (cx, cy, fx, fy, k1~k6, p1, p2), project the depth into a point cloud, and use the color image to color it. The results are shown below; you can see they are very good. [image: colored point cloud reconstructed from BEHAVE data]
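For reference, this is roughly the back-projection I perform, sketched with NumPy (my actual code may differ in details; lens distortion is left out here for brevity):

```python
import numpy as np

def depth_to_colored_pcd(depth, color, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth map that is already aligned to the color camera
    into a colored point cloud. Distortion (k1~k6, p1, p2) is ignored here
    for brevity; in practice the images would be undistorted first.

    depth: HxW uint16 depth image in millimeters
    color: HxWx3 uint8 RGB image at the same resolution as depth
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) / depth_scale        # depth in meters
    valid = z > 0                                     # drop invalid pixels
    x = (u - cx) * z / fx                             # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)[valid]      # N x 3 points
    colors = color[valid].astype(np.float32) / 255.0  # N x 3 colors in [0, 1]
    return points, colors
```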

But when I record Kinect data on my own, I get the following results. [image: colored point cloud from my own recording]

You can see that part of the human body's color lands on the wall, which means the depth corresponding to that part of the color image is wrong; in other words, the color and depth are misaligned. My color (get_color_image()) and depth (get_transformed_depth_image()) are read using the Kinect's built-in functions, and all parameters I use also come from the Kinect SDK (get_calibration()).
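For completeness, the capture path I described is roughly equivalent to the following sketch using the pyk4a wrapper (pyk4a is used here only for illustration; my own code calls the SDK functions named above):

```python
from pyk4a import PyK4A

# Open the device with default settings; real code would configure
# color resolution, depth mode, FPS, etc.
k4a = PyK4A()
k4a.start()

capture = k4a.get_capture()
color = capture.color              # color image from the color camera
depth = capture.transformed_depth  # depth warped into the color camera frame
calib = k4a.calibration            # factory calibration read from the device

k4a.stop()
```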

So I would like to ask a few questions:

  1. Are the color and depth in your data read directly from the Kinect (similar to the functions I use)? Or do you align color and depth yourself using the parameters (calibration.json's color, colortodepth, depth)?
  2. Do you use any additional calibration procedure, or do all matrix parameters come from the Kinect SDK? If there is additional calibration, how is it done?
  3. The issue of color and depth registration is also discussed here; as of 2021 it remained unanswered. Does your device have this kind of problem? If not, do you have any suggestions for solving the problem I am facing?

Looking forward to your reply!