sawan-kcl opened this issue 6 days ago
You can look at this example: https://github.com/introlab/rtabmap_ros/blob/57dd787a589ab40c75b43f1ed72084858440f107/rtabmap_examples/launch/realsense_d400.launch.py#L31-L34
Note also that if you launch on the same computer as the camera, I wouldn't use compressed images.
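For reference, the remappings from that example launch file can be sketched as a direct node invocation. The topic names below are assumptions for a RealSense D400-style camera, not taken from this thread; adjust them to whatever your camera actually publishes:

```shell
# Sketch: run rgbd_odometry against raw (uncompressed) camera topics.
# Topic names are assumptions for a D400 setup; verify with `ros2 topic list`.
ros2 run rtabmap_odom rgbd_odometry --ros-args \
  -p frame_id:=camera_link \
  -p approx_sync:=false \
  -r rgb/image:=/camera/color/image_raw \
  -r depth/image:=/camera/aligned_depth_to_color/image_raw \
  -r rgb/camera_info:=/camera/color/camera_info
```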
Hello @matlabbe,
Thanks a lot for your quick response. I tried to run the node as you mentioned, and I don't seem to be getting any errors, but I can't see the outputs in rtabmap_viz, and there are no messages when echoing the odom topic.
These are the parameter settings as logged while running the node:
[rgbd_odometry-1] [INFO] [1720090074.234554887] [rgbd_odometry]: Odometry: frame_id = camera_frame
[rgbd_odometry-1] [INFO] [1720090074.234733665] [rgbd_odometry]: Odometry: odom_frame_id = odom
[rgbd_odometry-1] [INFO] [1720090074.234747071] [rgbd_odometry]: Odometry: publish_tf = true
[rgbd_odometry-1] [INFO] [1720090074.234755326] [rgbd_odometry]: Odometry: wait_for_transform = 0.100000
[rgbd_odometry-1] [INFO] [1720090074.234773817] [rgbd_odometry]: Odometry: log_to_rosout_level = 4
[rgbd_odometry-1] [INFO] [1720090074.234798657] [rgbd_odometry]: Odometry: initial_pose = xyz=0.000000,0.000000,0.000000 rpy=0.000000,-0.000000,0.000000
[rgbd_odometry-1] [INFO] [1720090074.234808284] [rgbd_odometry]: Odometry: ground_truth_frame_id =
[rgbd_odometry-1] [INFO] [1720090074.234816541] [rgbd_odometry]: Odometry: ground_truth_base_frame_id = camera_frame
[rgbd_odometry-1] [INFO] [1720090074.234824628] [rgbd_odometry]: Odometry: config_path =
[rgbd_odometry-1] [INFO] [1720090074.234832724] [rgbd_odometry]: Odometry: publish_null_when_lost = true
[rgbd_odometry-1] [INFO] [1720090074.234840322] [rgbd_odometry]: Odometry: publish_compressed_sensor_data = false
[rgbd_odometry-1] [INFO] [1720090074.234848013] [rgbd_odometry]: Odometry: guess_frame_id =
[rgbd_odometry-1] [INFO] [1720090074.234855414] [rgbd_odometry]: Odometry: guess_min_translation = 0.000000
[rgbd_odometry-1] [INFO] [1720090074.234864376] [rgbd_odometry]: Odometry: guess_min_rotation = 0.000000
[rgbd_odometry-1] [INFO] [1720090074.234873144] [rgbd_odometry]: Odometry: guess_min_time = 0.000000
[rgbd_odometry-1] [INFO] [1720090074.234881399] [rgbd_odometry]: Odometry: expected_update_rate = 0.000000 Hz
[rgbd_odometry-1] [INFO] [1720090074.234889952] [rgbd_odometry]: Odometry: max_update_rate = 0.000000 Hz
[rgbd_odometry-1] [INFO] [1720090074.234898649] [rgbd_odometry]: Odometry: min_update_rate = 0.000000 Hz
[rgbd_odometry-1] [INFO] [1720090074.234907107] [rgbd_odometry]: Odometry: wait_imu_to_init = false
[rgbd_odometry-1] [INFO] [1720090074.234915249] [rgbd_odometry]: Odometry: sensor_data_compression_format = .jpg
[rgbd_odometry-1] [INFO] [1720090074.234922957] [rgbd_odometry]: Odometry: sensor_data_parallel_compression = true
[rgbd_odometry-1] [INFO] [1720090074.234934301] [rgbd_odometry]: Odometry: stereoParams_=0 visParams_=1 icpParams_=0
[rgbd_odometry-1] [INFO] [1720090074.246270161] [rgbd_odometry]: RGBDOdometry: approx_sync = false
[rgbd_odometry-1] [INFO] [1720090074.246478842] [rgbd_odometry]: RGBDOdometry: queue_size = 5
[rgbd_odometry-1] [INFO] [1720090074.246494496] [rgbd_odometry]: RGBDOdometry: qos = 0
[rgbd_odometry-1] [INFO] [1720090074.246504469] [rgbd_odometry]: RGBDOdometry: qos_camera_info = 0
[rgbd_odometry-1] [INFO] [1720090074.246513804] [rgbd_odometry]: RGBDOdometry: subscribe_rgbd = false
[rgbd_odometry-1] [INFO] [1720090074.246522562] [rgbd_odometry]: RGBDOdometry: rgbd_cameras = 1
[rgbd_odometry-1] [INFO] [1720090074.246531092] [rgbd_odometry]: RGBDOdometry: keep_color = false
[rgbd_odometry-1] [INFO] [1720090074.250130032] [rgbd_odometry]:
[rgbd_odometry-1] rgbd_odometry subscribed to (exact sync):
[rgbd_odometry-1] /camera/color/image/compressed,
[rgbd_odometry-1] /camera/depth/image/compressed,
[rgbd_odometry-1] /camera/color/image/camera_info
Could you please help, or point me towards any documentation for this? Also, is there any particular reason I shouldn't be using compressed images? Does it affect the visual odometry (other than obviously having fewer feature points, so maybe slightly lower accuracy)?
The documentation for ROS2 has not been ported from ROS1 yet. However, we use exactly the same topic names and parameters as in ROS1, so you can look at http://wiki.ros.org/rtabmap_ros/noetic_and_newer to see what the parameters mean and which topics the nodes subscribe to.
From:
[rgbd_odometry-1] rgbd_odometry subscribed to (exact sync):
[rgbd_odometry-1] /camera/color/image/compressed,
[rgbd_odometry-1] /camera/depth/image/compressed,
[rgbd_odometry-1] /camera/color/image/camera_info
If rgbd_odometry doesn't output any odometry and outputs warnings saying it didn't receive the topics, you may try with approx_sync:=true instead.
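For example, the parameter can be passed on the command line (the node invocation itself is an assumption; only the approx_sync parameter comes from the log above):

```shell
# Switch from exact to approximate time synchronization of the image topics.
ros2 run rtabmap_odom rgbd_odometry --ros-args -p approx_sync:=true
```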
You also have to confirm that these topics are actually published:
ros2 topic hz /camera/color/image/compressed
ros2 topic hz /camera/depth/image/compressed
ros2 topic hz /camera/color/image/camera_info
Also make sure that the subscribers and publishers of these topics are using the same QoS; you can verify with:
ros2 topic info /camera/color/image/compressed --verbose
Note also that if you are in simulation, make sure you use use_sim_time:=true.
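For example, the parameter can be set per node (the node shown is just an illustration; set it on every node in the pipeline):

```shell
# Make the node use the simulator's /clock instead of wall time.
ros2 run rtabmap_odom rgbd_odometry --ros-args -p use_sim_time:=true
```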
One last note: it seems that image_transport support for the rtabmap nodes is somewhat broken in the ROS2 port. You will need to add an image_transport republisher to convert your compressed topics to raw topics, then connect the outputs to rgbd_odometry. See this issue for more info: https://github.com/introlab/rtabmap_ros/issues/1181
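A minimal republisher sketch could look like the following. The topic names are taken from the log above, but whether your simulator publishes depth as plain `compressed` (as the log suggests) or as `compressedDepth` is an assumption you should verify with `ros2 topic list`:

```shell
# Republish the compressed color topic as a raw image topic.
ros2 run image_transport republish compressed raw --ros-args \
  -r in/compressed:=/camera/color/image/compressed \
  -r out:=/camera/color/image_raw

# Same for depth. If your camera uses the compressedDepth transport instead,
# replace the first positional argument and the remapped topic accordingly.
ros2 run image_transport republish compressed raw --ros-args \
  -r in/compressed:=/camera/depth/image/compressed \
  -r out:=/camera/depth/image_raw
```

rgbd_odometry can then be remapped to subscribe to the `/camera/color/image_raw` and `/camera/depth/image_raw` outputs.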
Hello, I'm trying to use this package to do VIO on a robot operating in a simulated microgravity environment. I plan to fuse the IMU with the VO output from RTAB-Map through the robot_localization package, and then use the filtered odometry output for mapping. My simulated sensor gives compressed RGB and depth image outputs. Could you please tell me how to set up or launch the nodes for this? I'm sorry, I'm a little lost, as I couldn't find any specific documentation for this on the ROS2 platform. My launch file, which is based on the launch file given for the ROS2 TurtleBot3 RGBD demo, looks like this:
Overall, I just need to know how to set it up for VO output and use it for mapping. Thanks in advance.
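For reference, the VO → robot_localization → mapping wiring described in the question can be sketched as below. Every topic name, the `/vo` remap, and the `ekf.yaml` file are assumptions to adapt to the actual setup; it assumes the compressed topics have already been republished to raw as discussed earlier in the thread:

```shell
# 1) Visual odometry from the raw camera topics, published on /vo.
ros2 run rtabmap_odom rgbd_odometry --ros-args \
  -p frame_id:=camera_frame \
  -r rgb/image:=/camera/color/image_raw \
  -r depth/image:=/camera/depth/image_raw \
  -r rgb/camera_info:=/camera/color/image/camera_info \
  -r odom:=/vo

# 2) Fuse /vo with the IMU; ekf.yaml (hypothetical) lists both inputs.
ros2 run robot_localization ekf_node --ros-args --params-file ekf.yaml

# 3) Mapping with the filtered odometry (/odometry/filtered is the
#    ekf_node default output) instead of rtabmap's internal odometry.
ros2 run rtabmap_slam rtabmap --ros-args \
  -p subscribe_depth:=true \
  -r odom:=/odometry/filtered \
  -r rgb/image:=/camera/color/image_raw \
  -r depth/image:=/camera/depth/image_raw \
  -r rgb/camera_info:=/camera/color/image/camera_info
```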