Describe the bug
It's actually a question. I can't understand the dense point cloud and can't associate its points with the image. Could someone please help me understand it?
To Reproduce
Steps to reproduce the behavior:
I am using the ouster sample dataset that is provided in the documentation.
I completed the preprocessing step successfully, and it generated png and ply files from the ros2 bags.
I then tried the manual initial guess (I can't use the automatic SuperGlue-based one because I don't have enough GPU memory). It opens a photo and the dense point cloud. I can relate the points in the original ros2 bag, visualized in rviz, to the images in the preprocessed data, but I can't make sense of the dense point cloud.
I also tried moving the point cloud around, but I couldn't understand anything from that either.
Expected behavior
Shouldn't the dense point cloud have a shape similar to the original, just with more points?
ouster_results contains the ply and png files.
ros2 run direct_visual_lidar_calibration initial_guess_manual ~/Downloads/temp/ouster_results/
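For what it's worth, the generated ply can be sanity-checked outside the calibration GUI. A minimal sketch (the exact .ply filename inside ouster_results is an assumption; pass the real path as an argument):

```python
# Minimal sketch: inspect a PLY file written by the preprocessing step,
# to sanity-check the dense cloud outside the initial_guess_manual GUI.
# The path is passed on the command line, e.g.:
#   python3 ply_info.py ~/Downloads/temp/ouster_results/<scan>.ply
import sys

def ply_header_info(path):
    """Parse a PLY header and return (format, vertex_count)."""
    fmt, count = None, None
    with open(path, "rb") as f:
        if f.readline().strip() != b"ply":
            raise ValueError("not a PLY file")
        while True:
            line = f.readline().strip()
            if line.startswith(b"format"):
                fmt = line.split()[1].decode()       # ascii / binary_little_endian / ...
            elif line.startswith(b"element vertex"):
                count = int(line.split()[2])         # number of points in the cloud
            elif line == b"end_header":
                break
    return fmt, count

if __name__ == "__main__" and len(sys.argv) > 1:
    fmt, count = ply_header_info(sys.argv[1])
    print(f"{sys.argv[1]}: format={fmt}, {count} vertices")
```

A large vertex count relative to a single scan would confirm the file is an accumulated/densified map rather than one raw frame; an external viewer (e.g. CloudCompare) can then be used to compare its shape against the rviz view.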
Screenshots and sample data
In rviz, the cycle stand, the small light pole, and the trees are visible and can be associated easily.
Environment:
Ubuntu 22.04 on amd64 architecture
ROS version: ROS 2 Humble