MISTLab / Swarm-SLAM

Sparse Decentralized Collaborative Simultaneous Localization and Mapping Framework for Multi-Robot Systems
https://lajoiepy.github.io/cslam_documentation/html/index.html
MIT License

Getting outputs for multiple robots #48

Closed TakShimoda closed 1 week ago

TakShimoda commented 3 months ago

Hello, thank you for helping me solve #43 which was for a single robot case. I can get optimized poses as g2o files and the pose timestamps, and can combine them to create a TUM file to evaluate ATE and RPE with evo.
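For reference, the evaluation step itself is just, e.g. (file names are placeholders; -a aligns the trajectories before computing the error):

evo_ape tum ground_truth.tum estimate.tum -a
evo_rpe tum ground_truth.tum estimate.tum -a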

I am right now stuck on getting proper pose outputs for two robots. My terminals are:

ros2 launch cslam_experiments rtabmap_kitti_stereo_odometry.launch.py frame_id:=B01_link namespace:=/r0 args:=--delete_db_on_start use_sim_time:=false odom_topic:=B01_odom
ros2 launch cslam_experiments rtabmap_kitti_stereo_odometry.launch.py frame_id:=B04_link namespace:=/r1 args:=--delete_db_on_start use_sim_time:=false odom_topic:=B04_odom

ros2 launch cslam_experiments cslam_stereo.launch.py namespace:=/r0 robot_id:=0 max_nb_robots:=2 config_file:=B01_stereo.yaml
ros2 launch cslam_experiments cslam_stereo.launch.py namespace:=/r1 robot_id:=1 max_nb_robots:=2 config_file:=B04_stereo.yaml

Then I play the bag files with renamed topics and a QoS override just for /tf_static.
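For anyone reproducing this, the override can be as simple as a small YAML file passed to ros2 bag play (durability transient_local is the important setting, since /tf_static is latched; the file name is just an example):

/tf_static:
  reliability: reliable
  durability: transient_local
  history: keep_all

ros2 bag play <bag_folder> --qos-profile-overrides-path qos_override.yaml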

Here is what works and doesn't work:

I tried combing through the source code, and here's where I can trace the issue:

Here are the values of the relevant parameters in current_neighbors_callback and the is_optimizer() function for each robot:

robot # | msg->robots | msg->origins | msg->origins.ids.size() | msg->origins.ids[0] | origin_robot_id | robot_id | is_optimizer
0       | 1747769088  | 1747769120   | 1                        | 1                   | 0               | 0        | true
1       | -1639485936 | -1639485904  | 1                        | 0                   | 1               | 1        | false, because origin_robot_id > msg->origins.ids[0]

So from this analysis, it looks like the system is designed so that only the robot with the lowest robot_id can output poses. How would I work around this? I tried various combinations, such as giving all robots the same robot_id, but that raises an error in the neighbors_manager.py script; other things, such as making robot_id negative, also give errors.
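In pseudocode, my understanding of the check is roughly this (a simplified Python sketch of the C++ logic, not the actual implementation; names follow the table above):

def is_optimizer(origin_robot_id, origins_ids):
    # A robot runs the optimization (and writes pose outputs) only if
    # its origin ID is the lowest among the origins it knows about;
    # otherwise it defers to that neighbor.
    return origin_robot_id <= min(origins_ids)

# Matches the table above:
# robot 0: is_optimizer(0, [1]) -> True
# robot 1: is_optimizer(1, [0]) -> False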

I was also trying to debug the topic cslam/get_current_neighbors, which current_neighbors_callback subscribes to and which is published here, but it's confusing: it seems to publish a default-initialized std_msgs::msg::String(), so I can't inspect its value to understand what exactly is published on the topic.

I was wondering how I could get the system to work properly so I can get pose outputs for all robots.

Thanks,

lajoiepy commented 3 months ago

Hi! Indeed, only the robot with the lowest ID produces results, but those results include the estimates from all robots (i.e. the merged map of all robots). The robots need to have different, positive IDs. cslam/get_current_neighbors is simply a request for the neighbors list, hence the empty message. Once the request is received, the list is published here on the topic cslam/current_neighbors. Importantly, to merge the maps, Swarm-SLAM needs to find loop closures between them. If some inter-robot loop closures are found, you should see the message New inter-robot loop closure measurement printed from here.
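So to inspect the actual neighbor information, echo the response topic rather than the request topic, e.g. with your /r0 namespace:

ros2 topic echo /r0/cslam/current_neighbors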

TakShimoda commented 3 months ago

Hi, I do notice that I get outputs for robot 0, and they seem to involve all robots. I get loop closure measurement outputs in the terminal, and the visualization shows the pose graphs being updated.

The outputs in the log folder for 3 robots look like this: [screenshot of the log folder contents]

If I match all the timestamps in pose_timestamps0.csv (say there are x of them) to the first x rows of VERTEX_SE3:QUAT, I get the resulting trajectory: [screenshot of the estimated trajectory]. However, the actual trajectories are 3 concentric circles of differing radii: [screenshot of the ground truth circles]

So from here, my main issues are:

- the g2o file mixes every robot's poses into one graph, so I can't separate the individual trajectories
- the combined estimate doesn't match the actual concentric-circle trajectories

So I wanted to know how I can get the individual robot trajectories and timestamps, and also how to improve the estimates. From Figure 4 of your paper, I can see individual robot trajectories: [figure from the paper]

TakShimoda commented 3 months ago

I just checked and realized I actually did get outputs for robots 1 and 2. For robot 1, I get the usual output, although it was fairly inaccurate. For robot 2, all I get is log.csv and pose_timestamps2.csv.

So now it seems: robot 0's folder outputs all the combined poses, robot 1's folder outputs its own poses, and robot 2 doesn't output its poses (possibly because it wasn't involved in an inter-robot loop closure).

lajoiepy commented 2 months ago

Here are some functions to reverse the GTSAM ID format and convert g2o files to TUM for evo: https://gist.github.com/lajoiepy/10fb6a09ec23d09a85166bf8a641c7ee
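In essence, the conversion is something like the following minimal sketch (assumptions: GTSAM's Symbol layout packs a robot character in the top byte of the vertex ID and the pose index in the lower 56 bits, the timestamp is the first column of pose_timestampsN.csv, and the csv row order matches the pose index order; the gist is the reference version):

import csv

CHR_SHIFT = 56  # gtsam::Symbol packs a key as (char << 56) | index

def split_key(key):
    # Reverse the GTSAM ID format: (robot character, pose index).
    return chr(key >> CHR_SHIFT), key & ((1 << CHR_SHIFT) - 1)

def g2o_to_tum(g2o_path, stamps_path, robot_chr, out_path):
    # One timestamp per pose, in graph order (first csv column).
    with open(stamps_path) as f:
        stamps = [row[0] for row in csv.reader(f) if row]
    with open(g2o_path) as g2o, open(out_path, "w") as tum:
        for line in g2o:
            tok = line.split()
            if not tok or tok[0] != "VERTEX_SE3:QUAT":
                continue  # skip edges and other graph entries
            robot, idx = split_key(int(tok[1]))
            if robot != robot_chr:
                continue  # keep only the requested robot's poses
            # TUM line: timestamp x y z qx qy qz qw
            tum.write(stamps[idx] + " " + " ".join(tok[2:9]) + "\n")

For example, g2o_to_tum("optimized.g2o", "pose_timestamps0.csv", "a", "robot0.tum"), if robot 0 maps to character 'a' in your graph.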

As for the accuracy, it depends on many parameters. Swarm-SLAM is a framework that ties together many third-party libraries, each with its own parameters that need tuning.

For example, in your case you should look into the Visual Place Recognition model (default: https://github.com/gmberton/CosPlace) and RTAB-Map for the 3D registration. I exposed many parameters for those techniques in the config files, but if some that you need are not exposed, I suggest changing them directly in the source code. For example, for RTAB-Map I expose the minimum number of keypoint inliers for the registration. Other parameters can be changed here: https://github.com/lajoiepy/cslam/blob/32527430ef271064d468f442a3ab754917abb011/src/front_end/rgbd_handler.cpp#L118
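For instance, an override in one of the config yaml files follows the usual ROS 2 parameter layout (the key names below are illustrative placeholders, not the exact parameter names; check the yaml files shipped with cslam_experiments for the real ones):

/**:
  ros__parameters:
    front_end:
      # illustrative name only: minimum keypoint inliers required
      # to accept a loop closure registration
      min_inliers: 20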

Unfortunately, figuring out the right parameters for each component in order to produce high-quality loop closures on your dataset can be a tedious process. Let me know if something is unclear.

TakShimoda commented 2 months ago

Hi, sorry for the delayed response.

Thank you for the Python script, it helped me create separate TUM files for each robot.

As for the accuracy, outside of tweaking the parameters, I noticed that CosPlace only uses RGB images as mentioned here, but my dataset only has monochrome images. So I recorded both stereo and RGB images on the same camera (a d435i), and fed stereo to the odometry and RGB-D to the loop closure node. I copied the rgbd yaml here to pass to the loop detection, map manager, and pose graph manager nodes, except I added a sensor_base_frame_id field similar to the stereo yaml, since it didn't have one and my frame link isn't "D435i_link".

In this case, am I doing this correctly? I play back both the stereo and RGB data, and the loop closure node gets the RGB topic from the config yaml. I realized my depth image size (320x240) is different from the RGB size (640x480), so it gave these messages:

[map_manager-2] [WARN] [1724081254.103768736] [cslam]: Depth image size (320x240) doesn't match RGB image size (640x480)!
[map_manager-2] Failed to find match for field 'x'.
[map_manager-2] Failed to find match for field 'y'.
[map_manager-2] Failed to find match for field 'z'.
[map_manager-2] Failed to find match for field 'rgb'.

Nevertheless, it did manage to close intra-robot loops, and it still recognized inter-robot loop closures but failed to transmit them so far (it gave a lot of messages about failed inter-robot loop closure measurements). So I probably need to run another experiment with depth images of the same dimensions and more overlap between the robot trajectories.
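In the meantime, one workaround I could try instead of re-recording is to republish the depth stream upscaled to the RGB resolution. A minimal rclpy sketch (topic names are placeholders for my setup, and it assumes the depth images are registered to the RGB frame, so the RGB camera_info still applies):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class DepthUpscaler(Node):
    # Republishes 320x240 depth images at 640x480 so they match the
    # RGB size expected by the loop closure node.
    def __init__(self):
        super().__init__('depth_upscaler')
        self.bridge = CvBridge()
        self.pub = self.create_publisher(Image, 'depth_resized', 10)
        self.sub = self.create_subscription(Image, 'depth', self.cb, 10)

    def cb(self, msg):
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        # Nearest-neighbor keeps depth values valid (no interpolation
        # across depth discontinuities).
        resized = cv2.resize(depth, (640, 480), interpolation=cv2.INTER_NEAREST)
        out = self.bridge.cv2_to_imgmsg(resized, encoding=msg.encoding)
        out.header = msg.header  # keep the original stamp and frame
        self.pub.publish(out)

def main():
    rclpy.init()
    rclpy.spin(DepthUpscaler())
    rclpy.shutdown()

if __name__ == '__main__':
    main()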

I had a few questions though:

lajoiepy commented 2 months ago

1- In the current implementation, it is assumed that the loop closing and odometry sensors are the same. For optimal accuracy, you should add this offset (extrinsics) to the loop closure transforms. However, pose graph optimization is fairly robust to small errors (such as the distance between the stereo and RGB sensors on the RealSense), so you will not see much of a difference in terms of accuracy.

2- Yes, the loop closures need depth or stereo, otherwise we can't get the scale information. So in your case, you could feed the stereo pairs corresponding (in time) to the RGB images used for loop closure detection.
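For 2, a simple way to pair the streams in time is message_filters, e.g. (a sketch with placeholder RealSense topic names; tune slop to your frame rates):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from message_filters import Subscriber, ApproximateTimeSynchronizer

class RgbStereoPairer(Node):
    # Emits the stereo pair whose timestamps are closest to each RGB
    # frame, so loop closure detection (RGB) and registration (stereo)
    # use measurements from the same instant.
    def __init__(self):
        super().__init__('rgb_stereo_pairer')
        rgb = Subscriber(self, Image, 'color/image_raw')
        left = Subscriber(self, Image, 'infra1/image_rect_raw')
        right = Subscriber(self, Image, 'infra2/image_rect_raw')
        # slop: maximum stamp difference (seconds) to still count as a match
        self.sync = ApproximateTimeSynchronizer([rgb, left, right],
                                                queue_size=10, slop=0.02)
        self.sync.registerCallback(self.cb)

    def cb(self, rgb, left, right):
        # Forward or process the matched triple here.
        self.get_logger().info(f'matched frames at {rgb.header.stamp.sec}')

def main():
    rclpy.init()
    rclpy.spin(RgbStereoPairer())
    rclpy.shutdown()

if __name__ == '__main__':
    main()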

TakShimoda commented 2 months ago

Okay, I'll test with RGB-D loop closure again, this time with matching depth and RGB image sizes, and I'll try tweaking some of the threshold parameters. Tracking is fairly accurate and intra-robot loop closure works well, so I'll see if adjusting some CosPlace parameters can improve the map merging between robots. Thanks!