Hi! Indeed, only the robot with the lowest ID produces results, but the results will include the estimates from all robots (i.e. the merged map of all robots).
The robots need to have different positive IDs.
`cslam/get_current_neighbors` is simply a request for the neighbors list, hence the empty message. Once the request is received, the list is published on the topic `cslam/current_neighbors`.
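If you want to inspect that exchange yourself, here is a minimal rclpy sketch. The empty `std_msgs/String` trigger matches the discussion below, but the reply message type (`RobotIdsAndOrigin` from `cslam_common_interfaces`) is an assumption to check against your version:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
# Assumed reply type; verify against cslam_common_interfaces in your checkout.
from cslam_common_interfaces.msg import RobotIdsAndOrigin

class NeighborsProbe(Node):
    def __init__(self):
        super().__init__('neighbors_probe')
        self.pub = self.create_publisher(String, 'cslam/get_current_neighbors', 10)
        self.sub = self.create_subscription(
            RobotIdsAndOrigin, 'cslam/current_neighbors', self.on_neighbors, 10)
        # Publish the empty request after connections have time to come up.
        self.timer = self.create_timer(1.0, self.send_request)

    def send_request(self):
        self.pub.publish(String())  # empty message: the content is irrelevant

    def on_neighbors(self, msg):
        self.get_logger().info(f'neighbor origin ids: {list(msg.origins.ids)}')

def main():
    rclpy.init()
    rclpy.spin(NeighborsProbe())

if __name__ == '__main__':
    main()
```

Run it inside the robot's namespace (e.g. with `--ros-args -r __ns:=/r0`) so the relative topic names resolve.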
Importantly, to merge the maps, Swarm-SLAM needs to find loop closures between them. If some inter-robot loop closures are found, you should see the message `New inter-robot loop closure measurement` printed from here.
Hi, I do notice I get outputs for robot 0, and it seems to involve all robots. I get loop closure measurement outputs in the terminal, and the visualization does show the pose graphs being updated.
The outputs in the log folder for 3 robots look like this:
If I match all the timestamps in pose_timestamps0.csv (say there are x of them) to the first x rows of VERTEX_SE3:QUAT, I get the resulting trajectory, although the actual trajectories are 3 concentric circles of differing radius:
So from here, my main issues are: how can I get the individual robot trajectories and timestamps, and how can I improve the estimates? From figure 4 of your paper, I can see individual robot trajectories:
I just checked, and I realized I actually did get outputs for robots 1 and 2. For robot 1 I get the usual output, although it was fairly inaccurate, while for robot 2 the only outputs are log.csv and pose_timestamps2.csv.
So now it seems: the robot 0 folder outputs all the combined poses, the robot 1 folder outputs its own poses, and robot 2 doesn't output its poses (possibly because it wasn't involved in an inter-robot loop closure).
Here are some functions to reverse the GTSAM id format and convert g2o files to TUM for evo: https://gist.github.com/lajoiepy/10fb6a09ec23d09a85166bf8a641c7ee
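In essence, the conversion boils down to the sketch below. The vertex key packing assumes GTSAM's `Symbol` convention (`key = (char << 56) | index`), and the character being `'a' + robot_id` is also an assumption, so double-check against the gist before relying on it:

```python
# Sketch: split a merged g2o file into per-robot TUM files.
def decode_gtsam_key(key: int):
    # GTSAM Symbol packs a character in the top byte and the index below it.
    robot_chr = (key >> 56) & 0xFF
    index = key & ((1 << 56) - 1)
    return robot_chr - ord('a'), index  # 'a' + robot_id is an assumption

def g2o_to_tum(g2o_path: str, timestamps: list, robot_id: int, out_path: str):
    """timestamps: one float per pose index of this robot, e.g. loaded
    from pose_timestamps<robot_id>.csv."""
    with open(g2o_path) as f, open(out_path, 'w') as out:
        for line in f:
            tok = line.split()
            if not tok or tok[0] != 'VERTEX_SE3:QUAT':
                continue
            rid, idx = decode_gtsam_key(int(tok[1]))
            if rid != robot_id:
                continue
            # g2o vertex order is x y z qx qy qz qw, same order TUM expects.
            x, y, z, qx, qy, qz, qw = map(float, tok[2:9])
            out.write(f'{timestamps[idx]} {x} {y} {z} {qx} {qy} {qz} {qw}\n')
```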
As for the accuracy, it depends on many parameters. Swarm-SLAM is a framework that puts together many third-party libraries, and each has its own parameters that need tuning.
For example, in your case, you should look into the Visual Place Recognition technique (default: https://github.com/gmberton/CosPlace) and RTAB-Map for the 3D registration. I exposed many parameters for those techniques in the config files, but if some that you need are not exposed, I suggest changing them directly in the source code. For example, for RTAB-Map I expose the minimum number of keypoint inliers for the registration. Other parameters can be changed here: https://github.com/lajoiepy/cslam/blob/32527430ef271064d468f442a3ab754917abb011/src/front_end/rgbd_handler.cpp#L118
Unfortunately, figuring out the right parameters for each component to produce high quality loop closures on your dataset can be a tedious process. Let me know if something is unclear.
Hi, sorry for the delayed response.
Thank you for the Python script, it helped me create separate TUM files for each robot.
As for the accuracy, outside of tweaking the parameters, I noticed that CosPlace only uses RGB images, as mentioned here, but my dataset only has monochrome images.
So I recorded both stereo and RGB images on the same camera (D435i), and fed the stereo to the odometry and the RGB-D to the loop closure node. I copied the RGB-D yaml here to pass to the loop detection, map manager, and pose graph manager nodes, except I added a `sensor_base_frame_id` field similar to the stereo yaml, since it didn't have one and my frame link isn't "D435i_link".
In this case, am I doing this correctly? I play back both stereo and RGB data, and the loop closure node gets the RGB topic from the config yaml. I realized my depth image size (320x240) is different from the RGB size (640x480), so it gave these messages:
[map_manager-2] [WARN] [1724081254.103768736] [cslam]: Depth image size (320x240) doesn't match RGB image size (640x480)!
[map_manager-2] Failed to find match for field 'x'.
[map_manager-2] Failed to find match for field 'y'.
[map_manager-2] Failed to find match for field 'z'.
[map_manager-2] Failed to find match for field 'rgb'.
Nevertheless, it did manage to close loops for intra-robot loop closure, and it still recognized inter-robot loop closures but failed to transmit them so far (it gave a lot of messages mentioning failed inter-robot loop closure measurements). So I probably need to run another experiment with depth images of the same dimensions as the RGB and more overlap in the robot trajectories.
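In case it helps, here is the workaround sketch I have in mind for the size mismatch: republish the depth stream upscaled to the RGB resolution before feeding it to the loop closure nodes. The topic names are placeholders for my setup, and the depth `camera_info` intrinsics would need to be scaled by the same factor, which this sketch doesn't do:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class DepthUpscaler(Node):
    def __init__(self):
        super().__init__('depth_upscaler')
        self.bridge = CvBridge()
        # Placeholder topics: remap to your actual depth topics.
        self.sub = self.create_subscription(Image, 'depth/image_raw', self.cb, 10)
        self.pub = self.create_publisher(Image, 'depth/image_resized', 10)

    def cb(self, msg):
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        # Nearest-neighbor so we don't interpolate across depth discontinuities.
        up = cv2.resize(depth, (640, 480), interpolation=cv2.INTER_NEAREST)
        out = self.bridge.cv2_to_imgmsg(up, encoding=msg.encoding)
        out.header = msg.header  # keep the stamp so RGB-D sync still works
        self.pub.publish(out)

def main():
    rclpy.init()
    rclpy.spin(DepthUpscaler())

if __name__ == '__main__':
    main()
```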
I had a few questions though:
1- Since the odometry uses the stereo images and the loop closure uses the RGB-D images, is the offset (extrinsics) between the two sensors accounted for anywhere?
2- When I set `sensor_type` in the yaml file to rgb, it says it's not supported:
[r1.cslam_map_manager]: Sensor type not supported: rgb
Do the loop closures require depth or stereo?
1- In the current implementation, it is assumed that the loop closing and odometry sensors are the same. For optimal accuracy, you should add this offset (extrinsics) in the loop closure transforms. However, pose graph optimization is fairly robust to small errors (such as the distance between the stereo and RGB sensors on the RealSense), so you will not see much of a difference in terms of accuracy.

2- Yes, the loop closures need depth or stereo, otherwise we can't get the scale information. So in your case, you could feed the stereo pairs corresponding (in time) to the RGB images used for loop closure detection.
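If you do want to apply the offset manually, it is only a change of frame on the relative transform: conjugate the loop closure by the extrinsic. A minimal numpy sketch, where `T_s_c` is the 4x4 stereo-to-color extrinsic from your own calibration (not something cslam provides):

```python
import numpy as np

def loop_closure_in_stereo_frame(T_c1_c2: np.ndarray, T_s_c: np.ndarray) -> np.ndarray:
    """Re-express a relative transform measured between the color frames
    (T_c1_c2) as a transform between the corresponding stereo frames."""
    # From W_T_ci = W_T_si @ T_s_c it follows that
    # T_s1_s2 = T_s_c @ T_c1_c2 @ inv(T_s_c).
    return T_s_c @ T_c1_c2 @ np.linalg.inv(T_s_c)
```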
Okay, I'll test with RGB-D loop closure again, this time with matching depth and RGB image sizes, and I'll try tweaking some of the threshold parameters. Tracking is fairly accurate and intra-robot loop closure works well, so I'll try to see if I can adjust some CosPlace parameters to improve the merging process for different robots. Thanks!
Hello, thank you for helping me solve #43, which was for the single-robot case. I can get the optimized poses as g2o files along with the pose timestamps, and can combine them to create a TUM file to evaluate ATE and RPE with evo.
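For reference, this is roughly how I run that evaluation with evo's Python API (file paths are placeholders; the `evo_ape` CLI gives the same numbers):

```python
from evo.tools import file_interface
from evo.core import sync, metrics

# Load ground truth and estimate in TUM format (placeholder paths).
ref = file_interface.read_tum_trajectory_file("ground_truth.tum")
est = file_interface.read_tum_trajectory_file("robot0_estimate.tum")

ref, est = sync.associate_trajectories(ref, est)  # match poses by timestamp
est.align(ref)  # SE(3) Umeyama alignment before computing the error

ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((ref, est))
print("ATE RMSE:", ape.get_statistic(metrics.StatisticsType.rmse))
```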
I am currently stuck on getting proper pose outputs for two robots. My terminals are:
ros2 launch cslam_experiments rtabmap_kitti_stereo_odometry.launch.py frame_id:=B01_link namespace:=/r0 args:=--delete_db_on_start use_sim_time:=false odom_topic:=B01_odom
ros2 launch cslam_experiments rtabmap_kitti_stereo_odometry.launch.py frame_id:=B04_link namespace:=/r1 args:=--delete_db_on_start use_sim_time:=false odom_topic:=B04_odom
ros2 launch cslam_experiments cslam_stereo.launch.py namespace:=/r0 robot_id:=0 max_nb_robots:=2 config_file:=B01_stereo.yaml
ros2 launch cslam_experiments cslam_stereo.launch.py namespace:=/r1 robot_id:=1 max_nb_robots:=2 config_file:=B04_stereo.yaml
Then I play the bag files with renamed topics and a QoS override just for `/tf_static`. Here is what works and doesn't work:
- If I launch `rtabmap_kitti_stereo_odometry.launch.py` and `cslam_stereo.launch.py` for just one of the robots (r0 or r1) and play back the appropriate bag file, it works (i.e. I can see the visualization and proper outputs in the log folder).
- If I launch `rtabmap_kitti_stereo_odometry.launch.py` and `cslam_stereo.launch.py` for both robots and play back the bag file for r0, it works.
- If I launch `rtabmap_kitti_stereo_odometry.launch.py` and `cslam_stereo.launch.py` for both robots and play back the bag file for r1, it doesn't work: there are no outputs. Note the main log folder is there, but it doesn't create the subfolders every 10 seconds as in the working case.

I tried combing through the source code, and here's where I can trace the issue:
- The pose outputs depend on estimates being published on `/<namespace>/cslam/optimized_estimates`, which never happens in the failing case.
- `optimizer_state_` wasn't what it should be to trigger the functions mentioned above. When checking the correctly working examples with some logging, their states would be: waiting -> POSEGRAPH_COLLECTION -> START_OPTIMIZATION -> OPTIMIZATION.
- `optimizer_state_` is set by checking the `is_optimizer()` function. Here is where I check why the working examples work but the specific example I mentioned above fails.
- Here are the different cases of what the parameters are in `current_neighbors_callback` and the `is_optimizer()` function: `msg->origins.ids.size()` is 0, hence this loop never triggers, so `is_optimizer` is never set to false (I also checked `odometry_pose_estimates_`, and it's always increasing in size for every use case).

So from this analysis, it simply looks like it's designed so only the lowest `robot_id` can output poses. How would I work around this? I tried all kinds of different combinations, such as giving all robots the same robot_id, but that gives an error in the neighbors_manager.py script. Obviously, other things such as making robot_id negative also give errors.

I was also trying to debug the topic `cslam/get_current_neighbors` that `current_neighbors_callback` subscribes to, and which is published to here, but it seems confusing: it publishes an initialized `std_msgs::msg::String()`, so I can't seem to debug its value and understand what exactly is published on the topic.

I was wondering how I could get the system to work properly so I can get pose outputs for all robots.
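To summarize my understanding of the election, it seems to boil down to something like this sketch (my own reading of `current_neighbors_callback`, not the actual cslam code):

```python
def is_optimizer(my_robot_id: int, origin_ids: list) -> bool:
    """A robot stays the optimizer unless it hears of a connected robot
    with a lower id in the origins list."""
    for other_id in origin_ids:
        if other_id < my_robot_id:
            return False  # defer to the lower-id robot
    return True

# The failing case above: origins.ids is empty, so the loop never runs and
# is_optimizer stays True for r1, yet it still produces no output.
print(is_optimizer(1, []))  # True
```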
Thanks,