introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

Question: GPS and IMU support, multiple RGB-D cameras #828

Closed carlostojal closed 9 months ago

carlostojal commented 1 year ago

Hi. First of all, sorry for asking this type of question here, since it's not strictly an issue. I have some questions:

1) Checking the noetic documentation, I can't find references to any GPS or IMU topic. Were they deprecated? How do they impact mapping and graph generation/optimization? What is the optimal way of feeding this data into RTAB-Map?

2) What is the optimal method for using multiple RGB-D cameras at different orientations with little to no overlap?

Thanks in advance.

matlabbe commented 1 year ago

Hi, thanks for the question. I had never updated the ROS wiki for those topics. Just did! Here is what I added for imu and gps:

imu (sensor_msgs/Imu)

Can be used to add gravity constraints in the graph. The whole map will then be aligned with gravity. The orientation quaternion should already be computed (see imu_filter_madgwick to compute it).

gps/fix (sensor_msgs/NavSatFix)

Can be used to add GPS constraints in the graph. This could eventually be used to export poses in GPS format. With the Optimizer/PriorsIgnored parameter set to false, the graph can also be optimized using the GPS values. See also this page on how GPS can be used to limit the loop closure detection radius.
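
A minimal sketch of wiring these inputs (assuming a ROS2 Python launch file, matching the Node example later in this thread; the /imu/data and /gps/fix source topics are illustrative, not prescribed names):

Node(
    package='rtabmap_ros', executable='rtabmap', output='screen',
    # RTAB-Map parameters are strings: 'false' lets GPS priors be used in optimization
    parameters=[{'Optimizer/PriorsIgnored': 'false'}],
    remappings=[('imu', '/imu/data'),        # e.g. output of imu_filter_madgwick
                ('gps/fix', '/gps/fix')])    # any sensor_msgs/NavSatFix source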

For multi-camera, rtabmap_ros currently supports only multiple stereo cameras or multiple RGB-D cameras, not multiple mono cameras. If you have multiple RGB-D or stereo cameras, you should use rgbd_sync or stereo_sync nodes/nodelets to synchronize the image topics per camera. You can then have multiple rgbd_image topics connected to the rtabmap node (set the parameters subscribe_rgbd and rgbd_cameras), as in the sketch below. There is an example here: demo_two_kinects.launch. To make it work properly, the TF between the base link and the cameras should be accurate.
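
Below is a minimal ROS2 launch sketch of that wiring (package/executable names follow the Node example later in this thread; the /camera0 and /camera1 topic names are assumptions for a typical two-camera setup):

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # One rgbd_sync per camera, each publishing a synchronized rgbd_image topic
        Node(
            package='rtabmap_ros', executable='rgbd_sync', name='rgbd_sync0',
            remappings=[('rgb/image', '/camera0/color/image_raw'),
                        ('depth/image', '/camera0/depth/image_raw'),
                        ('rgb/camera_info', '/camera0/color/camera_info'),
                        ('rgbd_image', '/camera0/rgbd_image')]),
        Node(
            package='rtabmap_ros', executable='rgbd_sync', name='rgbd_sync1',
            remappings=[('rgb/image', '/camera1/color/image_raw'),
                        ('depth/image', '/camera1/depth/image_raw'),
                        ('rgb/camera_info', '/camera1/color/camera_info'),
                        ('rgbd_image', '/camera1/rgbd_image')]),
        # rtabmap consumes both synchronized topics as rgbd_image0 and rgbd_image1
        Node(
            package='rtabmap_ros', executable='rtabmap', output='screen',
            parameters=[{'subscribe_rgbd': True, 'rgbd_cameras': 2}],
            remappings=[('rgbd_image0', '/camera0/rgbd_image'),
                        ('rgbd_image1', '/camera1/rgbd_image')]),
    ])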

alexk1976 commented 1 year ago

@matlabbe Hello Mathieu, in previous forums you mentioned that solvePnPRansac (2D->3D correspondence) and LBA are not supported when working with multiple RGB-D cameras. Is that still the case, or is it supported now? What would need to be implemented to support multi-camera thoroughly?

matlabbe commented 1 year ago

PnP for multi-camera is now supported in rtabmap. Just make sure rtabmap is built with the OpenGV dependency. Bundle adjustment has also been updated for multi-camera.

alexk1976 commented 1 year ago

@matlabbe Great, and thank you for the amazing work! Does your solution assume all sensor clocks and images are hardware-synced? Are features/points/poses matched and solved concurrently, or is each camera's image matched/solved separately so that hardware sync is not critical?

matlabbe commented 1 year ago

It assumes all cameras are hardware synchronized, but it should still work on systems where the cameras are not hardware synchronized. I don't have a direct comparison, though, of how much accuracy is impacted.

alexk1976 commented 1 year ago

@matlabbe When we build and run it with OpenGV, we get an immediate crash in rtabmap ([ERROR] [rtabmap-7]: process has died [pid 9805, exit code -6]). Tried both with single and multiple cameras on ROS2/Foxy. No issues when running without OpenGV, but dual-camera accuracy is not good enough. Any direction?

matlabbe commented 1 year ago

Can you run with gdb (section A of https://github.com/introlab/rtabmap_ros/issues/28#issue-99654759) and add the "--udebug" argument to the rtabmap node?

EDIT: sorry, the link is for ROS1. At least try with the --udebug argument to get more log output before the crash. With rtabmap.launch.py, it can be done with:

ros2 launch rtabmap_ros rtabmap.launch.py rtabmap_args:="-d --udebug" launch_prefix:="xterm -e gdb -ex run --args"

Or, if you use a Node configuration in your launch file:

Node(
    package='rtabmap_ros', executable='rtabmap', output='screen',
    arguments=['-d', '--udebug'],   # arguments must be a list of strings in ROS2 launch
    prefix='xterm -e gdb -ex run --args',
    ...
)

alexk1976 commented 1 year ago

Crash disappeared after we switched to using /tf instead of the ROS topic. Now dual-camera mode is working. Evaluating accuracy and will update. Thank you.

alexk1976 commented 1 year ago

@matlabbe Can you explain the multi-camera concept at a high level? Does rtabmap solve each camera image separately and fuse the resulting poses, or is the solving done concurrently on both images, with features from both cameras going to the solver? Thank you.

matlabbe commented 1 year ago

See "Non-central absolute pose" on this page: http://laurentkneip.github.io/opengv/page_how_to_use.html . So we assume V_n is fixed (assuming cameras trigger all at the exact same time). That will give the multi-camera "PnP" transform (like output of cv::solvePnPRansac() for the single camera case), then bundle adjustment (g2o) with all cameras is applied to refine the transform (with V_n or local_transform fixed). We tried with multi-camera with approximate synchronization, the results were still good, but I don't have comparison results between a hardware synchronized cameras setup and a software synchronized setup.

dStanhill commented 1 year ago

@matlabbe As I understand it, the above answer regarding the multi-camera concept refers mainly to visual odometry. What is done in local occupancy grid creation in the STM and in local loop closure?

matlabbe commented 1 year ago

My comment https://github.com/introlab/rtabmap_ros/issues/828#issuecomment-1367978243 also applies to visual loop closure detection / local visual proximity detection. Local occupancy grid creation also supports multi-camera: each camera will be projected (with or without ray tracing) to create the local grids (this also applies to OctoMap). One caveat is that ray tracing is done from the average of the cameras' viewpoints, not from the cameras themselves (this would not be too difficult to implement, though).
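
In other words (a tiny illustrative sketch, not the actual implementation), the single ray-tracing origin is the mean of the camera optical centers:

import numpy as np

def ray_tracing_origin(camera_positions_in_base):
    # camera_positions_in_base: (N, 3) array of camera optical centers in the base frame
    return np.asarray(camera_positions_in_base).mean(axis=0)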

dStanhill commented 1 year ago

I understand. So you mean that using all cameras can help declare certain cells as empty in the occupancy grid map that would be left unknown when using the average camera?

matlabbe commented 1 year ago

Screenshot from 2023-01-15 13-56-28

You can see in the right case that cells will be wrongly ray traced as empty if the cameras are located as in the left image.

dStanhill commented 1 year ago

Thanks

alexk1976 commented 1 year ago

@matlabbe We just discovered that we were running the multi-camera launch file with EstimationType=0 (3D->3D estimation), following demo_two_kinects.launch. When we tried to set EstimationType=1 (2D->3D estimation), rtabmap crashed. Can you confirm multi-camera was tested and is supposed to work with EstimationType=1? We'll try to debug. Thanks

matlabbe commented 1 year ago

2D->3D estimation should work with the changes from last summer (well, the integration of OpenGV was actually done for that). 3D->3D multi-camera was working before; it uses PCL.
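
For reference, a hedged sketch in the style of the Node example above (the value meanings follow rtabmap's Vis/EstimationType parameter: 0 = 3D->3D, 1 = 2D->3D PnP):

Node(
    package='rtabmap_ros', executable='rtabmap', output='screen',
    arguments=['--Vis/EstimationType', '1'])   # select 2D->3D (PnP) estimation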

If you have a bag and a corresponding launch file, I could try it here to see if I get the same error.

alexk1976 commented 1 year ago

@matlabbe It looks like the crash happens sporadically on some recordings while working OK on others. Attached is a crash reproduction with the launch file and rosbag file. instructions.txt specifies the build params and the launch.

https://drive.google.com/drive/folders/1gBOcoA9rGioCVbXwQIH2xsP1TEEX9HtH?usp=sharing

Thank you for the great support!

matlabbe commented 1 year ago

I tried your instructions and I didn't have a crash.

Cleaning up the parameters (see rtabmap_est_type_1_crash_param_cleanup.launch.zip):

ros2 launch rtabmap_est_type_1_crash_param_cleanup.launch.py \
    frame_id:=base_footprint \
    odom_topic:=/rtabmap/visual_odom_topic \
    qos:=1 \
    camera0:=camera0 \
    camera1:=camera1 \
    namespace:=rtabmap \
    rgbd_sync:=true \
    visual_odometry:=true \
    publish_tf_odom:=true \
    approx_sync:=true \
    rviz:=true \
    rtabmapviz:=true \
    args:="--delete_db_on_start \
        --Rtabmap/DetectionRate 1 \
        --Reg/Force3DoF true \
        --Grid/MaxObstacleHeight 1"

Screenshot from 2023-01-21 14-58-15

The default is 2D->3D estimation. The biggest issue I see is the right camera; I am not sure its orientation in TF is the right one, as it gives a strange cloud at the beginning:

Screenshot from 2023-01-21 15-03-51

The tilt in the front camera's static TF doesn't seem right either:

Screenshot from 2023-01-21 15-04-16

alexk1976 commented 1 year ago

Hi @matlabbe, we took your params but still have sporadic crashes on this and some other clips. Verified with updated rtabmap and rtabmap_ros (latest humble-devel). Did you also test it on humble-devel? Why do you think our cameras' TFs/orientations are wrong? Can you elaborate on what to look for/check? Thanks

matlabbe commented 1 year ago

Oh, you are on Ubuntu 22.04. Hmm, I'll need to set up GitHub Actions to generate the ROS2 docker images so I can try it.

Look at my last screenshot (taken with rtabmap-databaseViewer -> 3D View); this is a side view of the generated point cloud of the front camera in your robot base frame. We can clearly see that the camera is wrongly looking up; the bottom of the cloud should be aligned perfectly with the xy plane.

matlabbe commented 1 year ago

Did you apply this patch on OpenGV to disable -march=native?

matlabbe commented 1 year ago

Tried on Humble and it still works without crashing. Make sure to apply the patch for OpenGV from my previous post.

alexk1976 commented 1 year ago

Hi @matlabbe, indeed this patch fixed the issue. Thank you! Still, we see slightly better accuracy when working with a single camera. Do you have any ideas on which parameters to refine for dual-camera (Vis/MaxFeatures, DetectionRate?)

matlabbe commented 1 year ago

Well, if you didn't fix the TF of your cameras:

We can clearly see that the camera is wrongly looking up; the bottom of the cloud should be aligned perfectly with the xy plane.

it will be worse for sure. Ideally, extrinsic calibration between all the cameras is required on multi-camera systems. Also, hardware synchronization between all cameras is preferable, but I have examples working fine even when using approximate software sync between the cameras. Otherwise, performance should be at least equal to, and often better and more robust than, a single camera, since the field of view is a lot larger.

julianraheema commented 1 year ago

Hi @matlabbe, I have set up my ROS noetic launch file for multiple RGB-D cameras. I have two cameras: the first is located at the rear of the robot and the second on the side of the robot. rtabmap is showing only one camera pyramid, not two as in your screenshot, but once in a while the pyramid relocates to the second camera's position for half a second and then goes back to the rear camera's position. Any ideas? Please see my screenshot.

The second question is: how many RGB-D cameras can rtabmap sync?

multi_rtabmap

Thank you

matlabbe commented 1 year ago

If rtabmapviz is subscribing to odometry, the pyramids may not be correctly displayed.

With rgbdx_sync, up to 8 cameras.
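
As a hedged sketch (assuming the rgbdx_sync executable lives in the same package as the earlier examples; topic names are illustrative), combining three cameras could look like:

Node(
    package='rtabmap_ros', executable='rgbdx_sync', output='screen',
    parameters=[{'rgbd_cameras': 3}],                    # up to 8 supported
    remappings=[('rgbd_image0', '/camera0/rgbd_image'),
                ('rgbd_image1', '/camera1/rgbd_image'),
                ('rgbd_image2', '/camera2/rgbd_image')])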

alexk1976 commented 1 year ago

Hi @matlabbe, have GBA and the other post-processing methods been implemented for multiple cameras? Thank you

matlabbe commented 1 year ago

It would have to be tested, but OptimizerG2O supports multiple cameras. The post-processing functions may not have been tested or implemented with multi-camera support.

tehkillerbee commented 1 year ago

@matlabbe FYI, the gps/fix and imu topics do not appear to be mentioned anywhere in the documentation listed here (http://wiki.ros.org/rtabmap_odom). Is this intentional? Has support been removed?

matlabbe commented 1 year ago

@tehkillerbee Actually, it had never been added to the ROS wiki for the odometry part (it is documented for http://wiki.ros.org/rtabmap_slam). I just added it for odom: http://wiki.ros.org/rtabmap_odom

tehkillerbee commented 1 year ago

@matlabbe Thank you :)