maekjuplease opened this issue 2 weeks ago

Hi,

I really admire your work, and I am currently working on a multi-camera setup for SLAM and localization. The difficult part is that two cameras detect the lines, and the masked images are used to generate point clouds, which are then used for localization and map building. Could you give me some advice?

Thank you so much.
Can you give an example/screenshot of what you mean by "two cameras detect the lines"?
For the masked images, what are they?
> Can you give an example/screenshot of what you mean by "two cameras detect the lines"?

There are some lines on the floor surface; I would like to segment those lines out and use them for localization and mapping. Two cameras, mounted at the front and rear of the robot, detect the segmented lines.

> For the masked images, what are they?

Masked images are images that contain only the lines.

Here is the attachment.
Based on those pictures, it seems that the textured floor actually has good visual features to track. Removing them would make localization/visual odometry worse.
To use a mask like this to limit feature extraction, you can apply it to the depth image. By default, rtabmap won't extract features where there is no valid depth.
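As a minimal sketch of that idea (not rtabmap's own code), assuming the line mask is a binary image already registered to the depth image, the masking boils down to zeroing the depth outside the lines; the file names and depth encoding below are placeholders:

```python
import cv2
import numpy as np

# Placeholder inputs: a depth image registered to the color camera
# (e.g. 16UC1 in millimeters) and a binary mask of the segmented lines
# (255 on the lines, 0 elsewhere).
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)
mask = cv2.imread("line_mask.png", cv2.IMREAD_GRAYSCALE)

# A depth of 0 is treated as invalid, so zeroing everything outside the
# mask restricts feature extraction to the line pixels only.
masked_depth = np.where(mask > 0, depth, 0).astype(depth.dtype)

cv2.imwrite("masked_depth.png", masked_depth)
```

In a live setup, the same operation would go into a small node that subscribes to the depth and mask topics and republishes the masked depth image to rtabmap.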
> There are some lines on the floor surface; I would like to segment those lines out and use them for localization and mapping.
If the environment is structured with lines and you can detect them quite accurately, I would suggest trying 2D lidar-based SLAM/localization approaches, by converting the lines seen by the camera into a fake laser scan.
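For illustration only, here is a rough sketch of that fake-laser-scan idea, assuming the detected line points are already expressed as (x, y) coordinates in the robot's base frame; the frame name, field of view, resolution, and range limits are arbitrary placeholders:

```python
import math
import numpy as np
from sensor_msgs.msg import LaserScan


def points_to_fake_scan(points_xy, frame_id="base_link",
                        angle_min=-math.pi, angle_max=math.pi,
                        angle_increment=math.radians(0.5),
                        range_min=0.1, range_max=10.0):
    """Bin 2D line points (x, y) in the robot frame into a fake LaserScan.

    Each point is assigned to an angular bin by its bearing; the closest
    point in each bin becomes the range for that beam, as a real 2D lidar
    would report.
    """
    n_beams = int(round((angle_max - angle_min) / angle_increment)) + 1
    ranges = np.full(n_beams, float("inf"), dtype=np.float32)

    for x, y in points_xy:
        r = math.hypot(x, y)
        if not (range_min <= r <= range_max):
            continue
        i = int((math.atan2(y, x) - angle_min) / angle_increment)
        if 0 <= i < n_beams:
            ranges[i] = min(ranges[i], r)

    scan = LaserScan()
    scan.header.frame_id = frame_id
    scan.angle_min = angle_min
    scan.angle_max = angle_max
    scan.angle_increment = angle_increment
    scan.range_min = range_min
    scan.range_max = range_max
    scan.ranges = ranges.tolist()
    return scan
```

Note that a LaserScan keeps only the closest return per bearing, which is exactly the "lines behind other lines" limitation mentioned below.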
A similar option would be to use `icp_odometry` with a 2D `PointCloud2` `scan_cloud` (not a `LaserScan`, so that lines behind other lines can still be captured) generated from those lines, by setting `scan_cloud_is_2d:=true`. A downside is that ICP may fail to find good correspondences if only two parallel lines are seen at the same time.
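As a hypothetical sketch of how such a 2D cloud could be built from a masked depth image, assuming a pinhole camera with known intrinsics, depth already in meters, and a camera looking roughly parallel to the floor (with a tilted camera, the points should first be transformed into the robot frame, e.g. via TF, before dropping the height):

```python
import numpy as np


def mask_to_flat_cloud(depth_m, mask, fx, cx):
    """Back-project masked depth pixels and flatten them onto the floor plane.

    Only fx/cx are needed because the vertical image coordinate is discarded:
    the lines are assumed to lie on the floor. Returns Nx3 points
    (forward, left, 0), i.e. a purely 2D cloud centered on the camera.
    """
    v, u = np.nonzero((mask > 0) & (depth_m > 0))
    z = depth_m[v, u]          # distance along the optical axis (forward)
    x = (u - cx) * z / fx      # lateral offset, positive to the right
    flat = np.zeros((z.shape[0], 3), dtype=np.float32)
    flat[:, 0] = z             # forward
    flat[:, 1] = -x            # left (right-handed, x-forward convention)
    return flat
```

The resulting points could then be packed into a `sensor_msgs/PointCloud2` (for example with the `create_cloud_xyz32` helper from the sensor_msgs point cloud utilities) and remapped to `icp_odometry`'s `scan_cloud` input.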