hai-h-nguyen opened this issue 3 years ago
The main difference I see is that two independent realsense2_camera nodes are launched with the ROS approach, while in the standalone we enable the streams of both cameras at the same time in a single API call. This could help with time synchronization. Otherwise, I would need to do more tests.
To set the transform between the cameras in the standalone, see this parameter:
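Independently of where that parameter is set, the value itself is just the pose of one camera expressed in the other camera's frame. As a minimal sketch (the matrix values below are placeholders, not the real D435i/T265 calibration), here is how a 4x4 homogeneous extrinsic can be decomposed into the `x y z roll pitch yaw` form commonly used for transform parameters in ROS/RTAB-Map tooling:

```python
import numpy as np

# Hypothetical extrinsic: pose of the D435i color frame expressed in the
# T265 frame. These numbers are placeholders for illustration only.
T = np.array([
    [ 0.0,  0.0, 1.0, 0.009],
    [-1.0,  0.0, 0.0, 0.021],
    [ 0.0, -1.0, 0.0, 0.027],
    [ 0.0,  0.0, 0.0, 1.0  ],
])

def matrix_to_xyzrpy(T):
    """Decompose a 4x4 homogeneous transform into x y z roll pitch yaw
    (ZYX Euler convention, as used by ROS tf)."""
    x, y, z = T[:3, 3]
    R = T[:3, :3]
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return x, y, z, roll, pitch, yaw

print("%.3f %.3f %.3f %.3f %.3f %.3f" % matrix_to_xyzrpy(T))
```

The real values should come from the URDF/static transforms published by the realsense2_camera launch file (e.g. via `rosrun tf tf_echo`), not from hand-tuning.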
Hi @matlabbe ,
I want to ask whether the mapping performance between the two options is similar. In my case, it seems that with the stand-alone app the map quality is better, even though I have a correct transformation matrix between the two cameras for the ROS case. I use a D435i and a T265 camera. I just follow the commands on the wiki to launch the two sensors and rtabmap:
roslaunch realsense2_camera rs_d400_and_t265.launch
roslaunch rtabmap_ros rtabmap.launch args:="-d --Mem/UseOdomGravity true --Optimizer/GravitySigma 0.3" odom_topic:=/t265/odom/sample frame_id:=t265_link rgbd_sync:=true depth_topic:=/d400/aligned_depth_to_color/image_raw rgb_topic:=/d400/color/image_raw camera_info_topic:=/d400/color/camera_info approx_rgbd_sync:=false visual_odometry:=false
Btw, how do I fill in the transformation matrix when using the stand-alone rtabmap with the two cameras above?
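On the synchronization point raised earlier: `approx_rgbd_sync:=false` in the launch command above requests exact timestamp matching, whereas approximate sync pairs each frame with the nearest message within a tolerance. Here is a minimal, hypothetical sketch of what approximate matching does (rates, offsets, and function names are illustrative, not rtabmap_ros internals):

```python
import numpy as np

# Illustrative timestamps in seconds: RGB-D frames at ~30 Hz with a small
# driver clock offset, odometry samples at 200 Hz.
rgbd_stamps = 0.010 + np.arange(10) / 30.0
odom_stamps = np.arange(200) / 200.0

def pair_nearest(rgbd, odom, tol):
    """For each RGB-D stamp, pick the closest odometry stamp and reject
    pairs whose offset exceeds the sync tolerance (approximate sync in a
    nutshell). Exact sync would require a zero offset instead."""
    pairs = []
    for t in rgbd:
        i = int(np.argmin(np.abs(odom - t)))
        if abs(odom[i] - t) <= tol:
            pairs.append((t, odom[i]))
    return pairs

pairs = pair_nearest(rgbd_stamps, odom_stamps, tol=0.005)
offsets = [abs(a - b) for a, b in pairs]
print(len(pairs), max(offsets))
```

When the two cameras are driven by two independent ROS nodes, their stamps rarely match exactly, which is why a single-process capture of both streams can give tighter pairing.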