mgaoling / mpl_calibration_toolbox

An easy calibration toolbox for VECtor Benchmark
https://star-datasets.github.io/vector/calibration/
BSD 3-Clause Clear License

Depth reprojection seems misaligned with event data #1

Closed umgnunes closed 1 year ago

umgnunes commented 1 year ago

Hello! Thank you for the VECtor Dataset and all the related tools made available.

I tried to align the undistorted depth maps with the undistorted events (from the left camera) using the k4a_projector tool, but the results do not seem accurate enough. These are a few examples obtained on the robot_normal sequence, where the undistorted events are superimposed on the reprojected undistorted depth maps:

Frame 1

Frame 14

You can see that the events do not appear to be correctly aligned with the depth maps. Any help is welcome. Thanks
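
For reference, this is roughly how I generate the overlays above (a minimal sketch, not the actual k4a_projector code; the intrinsics, extrinsics, and file names are placeholders rather than the real VECtor calibration values):

```python
import numpy as np
import cv2

# Placeholder intrinsics/extrinsics; the real values come from the VECtor
# calibration files, not from this sketch.
K_depth = np.array([[504.0, 0.0, 320.0],
                    [0.0, 504.0, 288.0],
                    [0.0, 0.0, 1.0]])
K_event = np.array([[560.0, 0.0, 320.0],
                    [0.0, 560.0, 240.0],
                    [0.0, 0.0, 1.0]])
R = np.eye(3)                      # rotation depth -> event camera (placeholder)
t = np.array([0.05, 0.0, 0.0])     # translation depth -> event camera, metres (placeholder)

def reproject_depth(depth_m, h_ev=480, w_ev=640):
    """Back-project every valid depth pixel to 3-D and re-project it into
    the event camera, returning a sparse depth map in the event frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.reshape(-1)
    valid = z > 0
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)], axis=0)
    rays = np.linalg.inv(K_depth) @ pix            # normalized rays in the depth camera
    pts = rays[:, valid] * z[valid]                # 3-D points in the depth frame
    pts_ev = R @ pts + t[:, None]                  # 3-D points in the event frame
    proj = K_event @ pts_ev
    uv = (proj[:2] / proj[2]).round().astype(int)
    depth_ev = np.zeros((h_ev, w_ev), dtype=np.float32)
    inside = (uv[0] >= 0) & (uv[0] < w_ev) & (uv[1] >= 0) & (uv[1] < h_ev)
    depth_ev[uv[1, inside], uv[0, inside]] = pts_ev[2, inside]
    return depth_ev

# Hypothetical file names; depth assumed to be stored in millimetres.
depth_m = cv2.imread("depth_000001.png", cv2.IMREAD_ANYDEPTH) / 1000.0
depth_ev = reproject_depth(depth_m)
overlay = cv2.applyColorMap(
    cv2.convertScaleAbs(depth_ev, alpha=255.0 / max(depth_ev.max(), 1e-6)),
    cv2.COLORMAP_JET)
events_uv = np.load("events_frame_000001.npy")     # hypothetical (N, 2) array of event pixels
overlay[events_uv[:, 1], events_uv[:, 0]] = (255, 255, 255)
cv2.imwrite("overlay_000001.png", overlay)
```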

mgaoling commented 1 year ago

Hello. Thank you for your interest in our benchmark.

We had already observed this issue before recording. There are a few possible reasons for the misalignment, including:

  1. inaccurate extrinsic calibration between the event camera and the depth camera
  2. inaccurate depth readings from the Kinect camera
  3. camera motion and temporal misalignment
  4. other potential factors

Based on our camera extrinsic calibration tool, we can rule out the first possible reason for the misalignment. If you run the tool we provided, you will find that the average reprojection error is less than 1 pixel (if I recall correctly).
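
As a sanity check, a reprojection error of that order can be reproduced along these lines (a minimal OpenCV sketch, separate from our calibration tool; the board geometry, intrinsics, and file name below are placeholders):

```python
import numpy as np
import cv2

# Placeholder checkerboard geometry; the real board spec lives in the
# calibration configuration, not in this snippet.
ROWS, COLS, SQUARE_M = 6, 9, 0.03
obj_pts = np.zeros((ROWS * COLS, 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE_M

img = cv2.imread("calib_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
found, corners = cv2.findChessboardCorners(img, (COLS, ROWS))
assert found, "checkerboard not detected"
corners = cv2.cornerSubPix(
    img, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

# K and dist are the calibration result being checked (placeholder values here).
K = np.array([[560.0, 0.0, 320.0], [0.0, 560.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Solve the board pose, re-project the corners, and measure the pixel residual.
_, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
reproj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
err = np.linalg.norm(reproj.reshape(-1, 2) - corners.reshape(-1, 2), axis=1)
print(f"mean reprojection error: {err.mean():.3f} px")
```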

To investigate further, we conducted an experiment where the sensor suite remained stationary in front of the scene. We have attached a PDF file containing the results, where we overlaid the projected depth readings onto one of the image frames. Please note that the pink areas, visible in some object boundaries and in the vignette area, indicate places without depth readings.

depth_misalignment_comparsion_results.pdf
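
For clarity, the overlays in the PDF were produced roughly as follows (a sketch with placeholder file names, not our exact script; pink marks pixels without a depth reading):

```python
import numpy as np
import cv2

# Hypothetical inputs: an image frame and a depth map already projected into
# that frame at the same resolution (zeros where no depth reading landed).
frame = cv2.imread("frame_000001.png")                         # BGR image
depth = cv2.imread("depth_projected_000001.png", cv2.IMREAD_ANYDEPTH)

depth_vis = cv2.applyColorMap(
    cv2.convertScaleAbs(depth, alpha=255.0 / max(int(depth.max()), 1)),
    cv2.COLORMAP_JET)
depth_vis[depth == 0] = (203, 102, 255)   # pink (BGR): no depth reading

# Blend the depth visualization onto the frame so both sets of edges stay visible.
overlay = cv2.addWeighted(frame, 0.5, depth_vis, 0.5, 0.0)
cv2.imwrite("overlay_still_000001.png", overlay)
```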

In the scenes labeled "still", you can still observe some misalignment on the edges, such as the boundary of the robot and the border of the desk. We suspect that the primary cause is the second reason, inaccurate depth readings from the Kinect, as similar issues have been raised by other Kinect users. Please refer to the following links for more information.

https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1058

https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1201

Lastly, we recommend that you consider using a contour matching scheme to improve the alignment between depth readings and event readings, as we did in a few of our research projects.
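
Our exact scheme is project-specific, but the basic idea looks roughly like this (an illustrative sketch only: the Canny-based edge maps and the brute-force translational search are assumptions for demonstration, not our implementation):

```python
import numpy as np
import cv2

def edge_map(img_u8, low=50, high=150):
    """Binary edge map via Canny; input is an 8-bit image."""
    return (cv2.Canny(img_u8, low, high) > 0).astype(np.uint8)

def align_by_contours(depth_vis_u8, event_frame_u8, max_shift=10):
    """Brute-force search for the integer pixel shift that best aligns
    depth contours with event contours (chamfer-style cost)."""
    depth_edges = edge_map(depth_vis_u8)
    event_edges = edge_map(event_frame_u8)
    # Distance transform of the event edges: cost is low where a depth
    # contour pixel lands close to an event contour pixel.
    dist = cv2.distanceTransform((1 - event_edges).astype(np.uint8), cv2.DIST_L2, 3)
    ys, xs = np.nonzero(depth_edges)
    h, w = dist.shape
    best = (0, 0, np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            yy, xx = ys + dy, xs + dx
            ok = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
            cost = dist[yy[ok], xx[ok]].mean()
            if cost < best[2]:
                best = (dx, dy, cost)
    return best  # (dx, dy, mean chamfer cost)
```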

umgnunes commented 1 year ago

Thanks for the reply.

Lastly, we recommend that you consider using a contour matching scheme to improve the alignment between depth readings and event readings, as we did in a few of our research projects.

It would be great if you could make these tools available or, even better, provide aligned readings as, e.g., MVSEC and DSEC do, so that researchers have a common ground when using your dataset.

Rainlv commented 1 week ago

Hello, I found that the TUM RGB-D dataset had the same problem with inaccurate depth readings from the Kinect. They found that there is a constant scaling factor in the depth readings, which can be calibrated. Do you have plans to do that?

You can find the screenshot in the "Calibration of the depth images" section of the TUM RGB-D dataset page.
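
For example, such a constant factor could be estimated by comparing raw sensor readings against reference depths (a rough sketch; the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical paired measurements: depth reported by the sensor vs. a
# reference depth (e.g. a calibration target at known distances).
measured = np.array([0.52, 1.04, 1.57, 2.09, 2.61])   # metres, from the Kinect
reference = np.array([0.50, 1.00, 1.50, 2.00, 2.50])  # metres, ground truth

# Least-squares fit of a single scaling factor s such that reference ~= s * measured.
s = np.dot(measured, reference) / np.dot(measured, measured)
print(f"depth scaling factor: {s:.4f}")

# Corrected depth maps would then be depth_corrected = s * depth_raw.
```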

mgaoling commented 1 week ago

I believe the TUM RGB-D dataset is based on the Kinect v1, which was released in the early 2010s and captures depth by projecting an infrared structured-light pattern. In contrast, the Kinect used in our setup (the Azure Kinect) measures depth with a time-of-flight sensor.