+1 same question :)
Hi,
We released the original RGB and depth images; they are not undistorted. If you want, you can use the calibration parameters inside the calibrations folder to do the undistortion yourself.
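For reference, here is a minimal undistortion sketch using OpenCV. The file paths and JSON field names (`fx`, `fy`, `cx`, `cy`, `dist_coeffs`) are assumptions on my part, so adapt them to the actual layout of the files in the calibrations folder:

```python
# Minimal sketch: undistort a color image with OpenCV.
# Paths and JSON field names below are assumptions -- check the
# actual structure of the files in the calibrations folder.
import json
import cv2
import numpy as np

with open("calibrations/Date01/config/1/calibration.json") as f:  # hypothetical path
    calib = json.load(f)

# Build the 3x3 camera matrix from the (assumed) intrinsic fields.
K = np.array([[calib["fx"], 0.0,         calib["cx"]],
              [0.0,         calib["fy"], calib["cy"]],
              [0.0,         0.0,         1.0]])
dist = np.array(calib["dist_coeffs"])  # assumed order: k1, k2, p1, p2, k3

img = cv2.imread("t0003.000/k1.color.jpg")  # hypothetical path
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("k1.color.undistorted.jpg", undistorted)
```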
We released the depth images at the same resolution as the color images, and they are already aligned to the color camera coordinate frame. That means you don't need to apply the depth_to_color transformation manually.
If you check this code, you will see how I convert a depth image to a point cloud and then fuse the multi-view point clouds into one complete point cloud. All of these results are in the color camera coordinate frame, so you need the intrinsics of the color camera to project them back and align them with the color image.
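For illustration, here is a minimal sketch of that back-projection. It assumes the depth is stored in millimetres, and the paths and JSON field names are hypothetical; the repository's own script is the authoritative version:

```python
# Minimal sketch: back-project an aligned depth image to a point cloud
# in the color camera frame, using the *color* camera intrinsics.
import json
import cv2
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project an HxW depth image (already aligned to the color view)
    to an Nx3 point cloud in the color camera frame. depth_scale converts
    raw units to metres (1000.0 assumes millimetres -- an assumption here)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float32) / depth_scale
    x = (u - cx) * z / fx   # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth measurement

# Hypothetical paths and field names -- adapt to the actual dataset layout.
with open("calibrations/Date01/config/1/calibration.json") as f:
    calib = json.load(f)
fx, fy, cx, cy = (calib["color"][k] for k in ("fx", "fy", "cx", "cy"))
depth = cv2.imread("t0003.000/k1.depth.png", cv2.IMREAD_UNCHANGED)
points = depth_to_pointcloud(depth, fx, fy, cx, cy)
```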
Hope that is clear for you guys.
Thanks :D
Hello,
I'm currently working with the behave-dataset and I'm wondering whether the camera calibration parameters provided with the dataset have already been used to undistort the RGB and depth images in the sequences.
I also noticed that your calibration.json file provides the extrinsic parameters for color_to_depth. Could you clarify whether the RGB and depth images are aligned in this case?
Thank you for your help.