koide3 / direct_visual_lidar_calibration

A toolbox for target-less LiDAR-camera calibration [ROS1/ROS2]
https://koide3.github.io/direct_visual_lidar_calibration/

Can't understand the dense pointcloud while doing the Initial guess (Manual) #113


devvaibhav455 commented 2 weeks ago

Describe the bug: This is actually a question. I can't understand the dense point cloud and can't associate its points with the camera image. Could someone please help me understand it?

To Reproduce: Steps to reproduce the behavior:

  1. I am using the Ouster sample dataset provided in the documentation.
  2. I completed the preprocessing step successfully, and it generated png and ply files from the ROS2 bags (roughly as sketched after this list).
  3. I try to do the initial guess (manual); I can't use the automatic SuperGlue-based guess because I don't have enough GPU memory. It opens a photo and the dense point cloud. I can relate the points in the original ROS2 bag, visualized in RViz, to the images in the preprocessed data, but I can't make sense of the dense point cloud. I tried moving the point cloud around, but I can't understand anything from that either.
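
For reference, the preprocessing was invoked roughly as sketched below; the input bag path is a placeholder and the flag letters are assumptions based on the documented options, so treat this as an approximation rather than the exact command:

```bash
# Rough sketch of the preprocessing step (placeholder input path, assumed flags).
# ~/Downloads/temp/ouster_results is the output directory that received the png and ply files.
# -a = auto-detect the image/points topics, -v = visualize progress (assumed option letters);
# note that no dynamic-integration option was passed here.
ros2 run direct_visual_lidar_calibration preprocess ~/Downloads/temp/ouster_bag ~/Downloads/temp/ouster_results -av
```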

Expected behavior: Shouldn't the dense point cloud look similar in shape to the original scan, just with more points?

ouster_results contains the ply and png files. The viewer was launched with: ros2 run direct_visual_lidar_calibration initial_guess_manual ~/Downloads/temp/ouster_results/

Screenshots and sample data: (screenshot attached)

In RViz, the cycle stand, the little light pole, and the trees are visible and can be associated easily. (screenshot attached)

Environment: Ubuntu 22.04 (amd64), ROS 2 Humble

koide3 commented 1 week ago

Possibly you forgot to use the dynamic points integrator. Please see https://koide3.github.io/direct_visual_lidar_calibration/example/#ouster-camera-calibration
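
For reference, the Ouster example on the linked page enables the dynamic point integrator during preprocessing. A minimal sketch of that invocation is below; the bag path is a placeholder and the exact flag letters should be verified against the documentation:

```bash
# Re-run preprocessing with the dynamic point integrator enabled so that scans
# recorded while the sensor was moving are motion-compensated and accumulated
# into a dense, recognizable point cloud.
# -a = auto-detect topics, -d = dynamic point integration, -v = visualization
# (flag letters assumed from the example page; verify against the docs).
ros2 run direct_visual_lidar_calibration preprocess ~/Downloads/temp/ouster_bag ~/Downloads/temp/ouster_results -adv
```

Presumably, without the dynamic integrator, scans captured while the sensor moves are merged without motion compensation, which would explain why the accumulated cloud looks unrecognizable even though each raw scan in RViz is easy to interpret.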

devvaibhav455 commented 6 days ago

Thanks. I think the adv option helped.