introlab / rtabmap

RTAB-Map library and standalone application
https://introlab.github.io/rtabmap

Where to Integrate a body filter into custom build #1301

Open spelletier1996 opened 5 months ago

spelletier1996 commented 5 months ago

I would like to integrate a body filter into rtabmap in a custom fork. The body filter will take the most recent point cloud frame, remove the body points, and return a filtered cloud. Ideally this would work both for the point cloud generated by a stereo camera input and for a direct scan cloud input.

Could someone point me towards where in the source this would be best implemented? Or, at the very least, which are the best places to start to understand the data flow from the input scan_cloud or stereo_images to the point cloud being served to the mapping algorithms?

matlabbe commented 5 months ago

If you work with depth images, you could set the depth to 0 to mask the body in the image, then feed that updated depth image to rtabmap instead of the original one.
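The masking step above can be sketched in a few lines with numpy. This is a minimal illustration, not rtabmap code: the body mask (which pixels the robot occupies) is assumed to come from elsewhere, e.g. a self-filter node, and is hard-coded here.

```python
import numpy as np

def mask_depth(depth: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
    """Return a copy of `depth` with body pixels set to 0 (invalid depth)."""
    filtered = depth.copy()
    filtered[body_mask] = 0  # rtabmap treats zero-depth pixels as invalid
    return filtered

# Toy 4x4 depth image (meters) with the "body" in the bottom-left corner.
depth = np.full((4, 4), 2.0, dtype=np.float32)
body_mask = np.zeros((4, 4), dtype=bool)
body_mask[2:, :2] = True

filtered = mask_depth(depth, body_mask)
```

In a real node you would do this on each incoming depth image and republish the result on the topic rtabmap subscribes to.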

For general point clouds, you may filter the point cloud first, then feed the filtered cloud to rtabmap.

For the implementation, the easiest approach would be to create your own node to mask the depth image or filter the point cloud of your lidar. From robot_description, you can get the 3D model for each TF link, then use TF to know where those 3D models are at a specific time and reproject them into the camera you want to filter.
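The reprojection step can be sketched with a plain pinhole model. This is a hypothetical simplification: in a real node the body sample points would come from the robot_description meshes transformed into the camera frame via TF, and `K` from the camera_info topic; here both are hard-coded.

```python
import numpy as np

def project_to_mask(points_cam, K, shape):
    """Project 3D points (camera frame) into a boolean image mask."""
    mask = np.zeros(shape, dtype=bool)
    for X, Y, Z in points_cam:
        if Z <= 0:
            continue  # point is behind the camera
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            mask[v, u] = True
    return mask

# Toy intrinsics and two body sample points, 1 m in front of the camera.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
body_points = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)]
mask = project_to_mask(body_points, K, (48, 64))
```

In practice you would densely sample (or rasterize) each link's mesh, dilate the resulting mask for safety, then use it to zero out the depth image.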

For inspiration, you may take a look at the camera_self_filter, rgbd_self_filter, or robot_body_filter packages.

spelletier1996 commented 5 months ago

I'm currently trying to integrate robot_body_filter but have run into some issues when providing a scan_cloud input to rtabmap while still using stereo odometry (ICP doesn't seem to work with the point cloud generated from the cameras).

When I provide the raw stereo images + stereo odometry I'm able to map relatively well, but when I generate the point cloud separately (using the stereo image pipeline), feed it into robot_body_filter, and then into rtabmap, I can't get any mapping to happen. I can see the point cloud in RViz and the odometry quality is 500+, but the cloud_map just doesn't build.

Whenever I move my robot, the area that was being mapped is removed. I've set `"Rtabmap/TimeThr": "0"`, so I don't think that's the issue.

Edit: Is it possible that the robot remaining stationary while the camera moves (the head of the robot has a camera in it, all updated through TF transforms) is causing an issue?

Any thoughts as to where I should be investigating?

matlabbe commented 4 months ago

> When I provide the raw stereo images + stereo odometry I'm able to map relatively well, but when I generate the point cloud separately (using the stereo image pipeline), feed it into robot_body_filter, and then into rtabmap, I can't get any mapping to happen.

Are you feeding rtabmap a filtered point cloud or a filtered depth image aligned with the left stereo image? I recommend the second approach: feed RGB + filtered depth images instead of stereo images. Don't use ICP with stereo point clouds.

If the robot (i.e., the frame_id set on rtabmap) doesn't move, by default new frames are not accumulated in the map. You can set RGBD/LinearUpdate to 0 (disabled) to still add these frames to the map when the robot is not moving (only the camera is).
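Following the same string-valued parameter style as the `Rtabmap/TimeThr` setting mentioned above, the change could look like this in a parameters dict (a sketch, assuming you pass rtabmap parameters this way, e.g. from a ROS 2 Python launch file):

```python
# rtabmap parameters are passed as strings; "0" disables the threshold.
parameters = {
    "Rtabmap/TimeThr": "0",
    "RGBD/LinearUpdate": "0",   # add frames even when the robot hasn't translated
    "RGBD/AngularUpdate": "0",  # likewise for rotation, if needed
}
```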

The TF between the camera and the base frame (frame_id) can move, no problem; just be aware of the effect of RGBD/LinearUpdate when the robot doesn't move.