Closed: aaravrav142 closed this issue 2 years ago
Hello Alex.
Do you mean like in https://github.com/ROBOTIS-JAPAN-GIT/turtlebot3_slam_3d/issues/9?
Ah yes! I should have checked that before. Another question: the sample bag file contained only the stereo left image and depth. How are you generating the point cloud if you are not using both the left and right images? I would like to record my own bag data with an RGBD sensor such as a Kinect on a mobile robot. Can you advise which topics I should save for this? Thanks
In a stereo camera, the left and right images are used to generate the depth image. Since we already have the depth topic, we only need a single RGB image for the point cloud reconstruction. In this case we are using the left one.
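The reconstruction described above (a depth image plus one RGB image) boils down to standard pinhole back-projection. Here is a minimal NumPy sketch; the intrinsics `fx, fy, cx, cy` are placeholder values for illustration only, since the real ones come from the camera's `camera_info` topic:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to an N x 3 point cloud
    using the pinhole camera model. fx, fy, cx, cy are the camera
    intrinsics (use the values published on the sensor's camera_info
    topic in practice)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels (zero depth means no measurement)
    return points[points[:, 2] > 0]

# Tiny synthetic example: a 2x2 depth image with every pixel 1 m away.
depth = np.ones((2, 2), dtype=np.float32)
cloud = depth_to_pointcloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

Color is then attached by reading the RGB value at the same pixel `(u, v)`, which is why one rectified color image registered to the depth frame is enough.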
Look for topics with the same message types as the ones in the example bag file. For a Kinect, something like /depth and /image_rect, if I'm not mistaken.
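A plausible record command would look like the sketch below. The topic names are assumptions based on common Kinect driver conventions (openni/freenect usually publish under `/camera/...`); check `rostopic list` on your own robot and substitute the actual names:

```shell
# Hypothetical topic names -- verify with `rostopic list` first.
# Depth, RGB, their camera_info, and tf are the usual minimum
# for RGBD point cloud reconstruction.
rosbag record \
  /camera/rgb/image_rect_color \
  /camera/rgb/camera_info \
  /camera/depth_registered/image_raw \
  /camera/depth_registered/camera_info \
  /tf /tf_static \
  -O rgbd_run.bag
```

Recording the `camera_info` topics alongside the images matters, since the intrinsics they carry are needed to back-project depth pixels into 3D.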
Cool, this answered my question. Many thanks. Closing.
Hi, thanks for this great package. I would like to know if it's possible to segment the grid map into classes based on the detected object annotations?
Thanks Alex