Closed sawan-kcl closed 4 months ago
You can look at this example: https://github.com/introlab/rtabmap_ros/blob/57dd787a589ab40c75b43f1ed72084858440f107/rtabmap_examples/launch/realsense_d400.launch.py#L31-L34
Note also that if you launch on the same computer as the camera, I wouldn't use compressed images.
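In essence, the approach in that example is to point rgbd_odometry directly at the camera's raw topics. A minimal sketch of such a launch file (the topic and frame names below are illustrative assumptions, not copied from the linked file; adapt them to your camera driver):

```python
# Illustrative sketch only: subscribe rgbd_odometry directly to raw topics.
# All topic/frame names below are assumptions; adapt to your setup.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_odom', executable='rgbd_odometry', output='screen',
            parameters=[{'frame_id': 'camera_link', 'approx_sync': False}],
            remappings=[
                ('rgb/image', '/camera/color/image_raw'),
                ('rgb/camera_info', '/camera/color/camera_info'),
                ('depth/image', '/camera/aligned_depth_to_color/image_raw')]),
    ])
```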
Hello @matlabbe ,
Thanks a lot for your quick response. I tried to run the node as you mentioned and I don't seem to be getting any error, but I can't see the outputs in rtabmap_viz, and there is no message when echoing the odom topic.
These are the parameter settings as logged while running the node:
[rgbd_odometry-1] [INFO] [1720090074.234554887] [rgbd_odometry]: Odometry: frame_id = camera_frame
[rgbd_odometry-1] [INFO] [1720090074.234733665] [rgbd_odometry]: Odometry: odom_frame_id = odom
[rgbd_odometry-1] [INFO] [1720090074.234747071] [rgbd_odometry]: Odometry: publish_tf = true
[rgbd_odometry-1] [INFO] [1720090074.234755326] [rgbd_odometry]: Odometry: wait_for_transform = 0.100000
[rgbd_odometry-1] [INFO] [1720090074.234773817] [rgbd_odometry]: Odometry: log_to_rosout_level = 4
[rgbd_odometry-1] [INFO] [1720090074.234798657] [rgbd_odometry]: Odometry: initial_pose = xyz=0.000000,0.000000,0.000000 rpy=0.000000,-0.000000,0.000000
[rgbd_odometry-1] [INFO] [1720090074.234808284] [rgbd_odometry]: Odometry: ground_truth_frame_id =
[rgbd_odometry-1] [INFO] [1720090074.234816541] [rgbd_odometry]: Odometry: ground_truth_base_frame_id = camera_frame
[rgbd_odometry-1] [INFO] [1720090074.234824628] [rgbd_odometry]: Odometry: config_path =
[rgbd_odometry-1] [INFO] [1720090074.234832724] [rgbd_odometry]: Odometry: publish_null_when_lost = true
[rgbd_odometry-1] [INFO] [1720090074.234840322] [rgbd_odometry]: Odometry: publish_compressed_sensor_data = false
[rgbd_odometry-1] [INFO] [1720090074.234848013] [rgbd_odometry]: Odometry: guess_frame_id =
[rgbd_odometry-1] [INFO] [1720090074.234855414] [rgbd_odometry]: Odometry: guess_min_translation = 0.000000
[rgbd_odometry-1] [INFO] [1720090074.234864376] [rgbd_odometry]: Odometry: guess_min_rotation = 0.000000
[rgbd_odometry-1] [INFO] [1720090074.234873144] [rgbd_odometry]: Odometry: guess_min_time = 0.000000
[rgbd_odometry-1] [INFO] [1720090074.234881399] [rgbd_odometry]: Odometry: expected_update_rate = 0.000000 Hz
[rgbd_odometry-1] [INFO] [1720090074.234889952] [rgbd_odometry]: Odometry: max_update_rate = 0.000000 Hz
[rgbd_odometry-1] [INFO] [1720090074.234898649] [rgbd_odometry]: Odometry: min_update_rate = 0.000000 Hz
[rgbd_odometry-1] [INFO] [1720090074.234907107] [rgbd_odometry]: Odometry: wait_imu_to_init = false
[rgbd_odometry-1] [INFO] [1720090074.234915249] [rgbd_odometry]: Odometry: sensor_data_compression_format = .jpg
[rgbd_odometry-1] [INFO] [1720090074.234922957] [rgbd_odometry]: Odometry: sensor_data_parallel_compression = true
[rgbd_odometry-1] [INFO] [1720090074.234934301] [rgbd_odometry]: Odometry: stereoParams_=0 visParams_=1 icpParams_=0
[rgbd_odometry-1] [INFO] [1720090074.246270161] [rgbd_odometry]: RGBDOdometry: approx_sync = false
[rgbd_odometry-1] [INFO] [1720090074.246478842] [rgbd_odometry]: RGBDOdometry: queue_size = 5
[rgbd_odometry-1] [INFO] [1720090074.246494496] [rgbd_odometry]: RGBDOdometry: qos = 0
[rgbd_odometry-1] [INFO] [1720090074.246504469] [rgbd_odometry]: RGBDOdometry: qos_camera_info = 0
[rgbd_odometry-1] [INFO] [1720090074.246513804] [rgbd_odometry]: RGBDOdometry: subscribe_rgbd = false
[rgbd_odometry-1] [INFO] [1720090074.246522562] [rgbd_odometry]: RGBDOdometry: rgbd_cameras = 1
[rgbd_odometry-1] [INFO] [1720090074.246531092] [rgbd_odometry]: RGBDOdometry: keep_color = false
[rgbd_odometry-1] [INFO] [1720090074.250130032] [rgbd_odometry]:
[rgbd_odometry-1] rgbd_odometry subscribed to (exact sync):
[rgbd_odometry-1] /camera/color/image/compressed,
[rgbd_odometry-1] /camera/depth/image/compressed,
[rgbd_odometry-1] /camera/color/image/camera_info
Could you please help or point me towards any documentation for this? Also, is there any particular reason I shouldn't be using compressed images? Does it affect the visual odometry (other than obviously having fewer feature points, so maybe slightly lower accuracy)?
The documentation for ROS2 has not been ported from ROS1 yet. However, we use exactly the same topic names and parameters as in ROS1, so you can look at http://wiki.ros.org/rtabmap_ros/noetic_and_newer to see what the parameters mean and which topics the nodes subscribe to.
From:
[rgbd_odometry-1] rgbd_odometry subscribed to (exact sync):
[rgbd_odometry-1] /camera/color/image/compressed,
[rgbd_odometry-1] /camera/depth/image/compressed,
[rgbd_odometry-1] /camera/color/image/camera_info
If rgbd_odometry doesn't output any odometry and prints warnings saying it didn't receive the topics, you may try with approx_sync:=true instead.
You also have to confirm that these topics are actually published:
ros2 topic hz /camera/color/image/compressed
ros2 topic hz /camera/depth/image/compressed
ros2 topic hz /camera/color/image/camera_info
Also make sure that the subscribers and publishers of these topics are using the same QoS; you can verify with:
ros2 topic info /camera/color/image/compressed --verbose
Note also that if you are in simulation, make sure you use use_sim_time:=true.
One last note: it seems that image_transport support for the rtabmap nodes is somewhat broken in the ROS2 port. You will need to add an image_transport republisher to convert your compressed topics to raw topics, then connect the outputs to rgbd_odometry. See this issue for more info: https://github.com/introlab/rtabmap_ros/issues/1181
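For intuition on why approx_sync:=true can help: exact sync requires identical timestamps on the RGB and depth messages, while approximate sync pairs messages whose stamps merely fall within a tolerance (capped by approx_sync_max_interval). The real matching is done by message_filters; the sketch below is only a standalone illustration of the core idea:

```python
# Rough sketch of exact vs. approximate timestamp matching, to illustrate
# why exact sync fails when RGB and depth stamps differ by a few ms.
# This is NOT the actual message_filters algorithm, just the core idea.

def pair_messages(rgb_stamps, depth_stamps, max_interval):
    """Pair each RGB stamp with the closest depth stamp. With
    max_interval=None an exact match is required; otherwise the gap
    must be at most max_interval seconds."""
    pairs = []
    for t_rgb in rgb_stamps:
        best = min(depth_stamps, key=lambda t: abs(t - t_rgb), default=None)
        if best is None:
            continue
        gap = abs(best - t_rgb)
        if (max_interval is None and gap == 0.0) or \
           (max_interval is not None and gap <= max_interval):
            pairs.append((t_rgb, best))
    return pairs

rgb = [0.000, 0.033, 0.066]
depth = [0.002, 0.035, 0.064]  # stamps off by ~2 ms

print(pair_messages(rgb, depth, None))  # exact sync: no pairs at all
print(pair_messages(rgb, depth, 0.01))  # approx sync, 10 ms tolerance: 3 pairs
```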
Hello @matlabbe ,
I used your suggestions from #1181 to successfully republish the topics in raw format, but now I get this error:
[ERROR] [1720557746.870152649] [rgbd_odometry]: Input type must be image=mono8,mono16,rgb8,bgr8,bgra8,rgba8 and image_depth=32FC1,16UC1,mono16. Current rgb=bgr8 and depth=bgr8
which tells me there is a data-type mismatch. However, I can also see that bgr8 is one of the accepted types for RGB, so I'm assuming the problem is with the depth image. How can I ensure it is in one of the 32FC1, 16UC1, or mono16 formats, and which format is recommended?
I should mention that my compressed depth image is of type sensor_msgs/msg/Image with jpeg compression, so I had used
ros2 run image_transport republish compressed raw --ros-args -r in/compressed:=/camera/depth/image/compressed -r out:=/camera/depth/image
instead of
ros2 run image_transport republish compressedDepth raw --ros-args -r in/compressedDepth:=/depth/image/compressed -r out:=/depth/image
Using the latter throws the following error:
[ERROR] [1720692790.544086623] [image_republisher]: SubscriberPlugin::subscribeImpl with five arguments has not been overridden
terminate called after throwing an instance of 'std::runtime_error'
what(): Unknown encoding jpeg
[ros2run]: Aborted
Also, I wanted to know: how does setting a higher value of approx_sync_max_interval affect the SLAM and VO output?
Thanks a lot in advance for your patience and help.
[UPDATE]: I was able to get the raw depth image in 32FC1 format using OpenCV. I haven't used it for VO yet; I will update here as soon as I do.
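For reference, the usual conversion between the two depth encodings rgbd_odometry accepts is 16UC1 (integer millimeters, the common OpenNI/RealSense convention) to 32FC1 (float meters). A minimal NumPy sketch of that step, mirroring what one would do with cv_bridge/OpenCV (how the simulator originally packed the depth is a separate question and not covered here):

```python
import numpy as np

def depth_16uc1_to_32fc1(depth_mm: np.ndarray) -> np.ndarray:
    """Convert a 16UC1 depth image (millimeters) to 32FC1 (meters).
    Zero-valued pixels mean 'no measurement' and stay zero."""
    assert depth_mm.dtype == np.uint16 and depth_mm.ndim == 2
    return depth_mm.astype(np.float32) / 1000.0

# Tiny example: 0 (invalid), 500 mm, 1000 mm, 2500 mm.
d = np.array([[0, 500], [1000, 2500]], dtype=np.uint16)
print(depth_16uc1_to_32fc1(d))  # float32 array in meters
```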
Hello, visual odometry and SLAM mapping are not working for me. Here is the launch file I'm using:
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, SetEnvironmentVariable
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    parameters = [{
        'frame_id': 'camera_frame',
        'use_sim_time': True,
        'approx_sync': True,
        'approx_sync_max_interval': 0.01,
        'qos_image': 1,
        'qos_camera_info': 1}]

    remappings = [
        ('rgb/image', '/camera/color/image'),
        ('rgb/camera_info', '/camera/color/image/camera_info'),
        ('depth/image', '/camera/depth/image_raw')]

    return LaunchDescription([
        # Nodes to launch
        Node(
            package='rtabmap_odom', executable='rgbd_odometry', output='screen',
            parameters=parameters,
            remappings=remappings),
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=parameters,
            remappings=remappings,
            arguments=['-d']),
        Node(
            package='rtabmap_viz', executable='rtabmap_viz', output='screen',
            parameters=parameters,
            remappings=remappings),
    ])
After following the suggestions and digging into the errors I was facing, I made the following changes and configurations:
I converted the compressed RGB and depth images into raw images. Currently the raw RGB image has bgr8 encoding and the raw depth image has 32FC1 encoding.
Regarding the QoS profiles: after setting them as shown in the 'parameters' section of the launch file above, my publisher profile is RELIABLE, the rgbd_odometry subscriber is BEST_EFFORT, and the rtabmap subscriber is RELIABLE.
Also, checking with ros2 topic hz /topic (for sync considerations), I can see that the frequency is almost the same for the raw RGB and raw depth images.
I have kept approx_sync: True as per your suggestion, and approx_sync_max_interval: 0.01 because of a warning while running the node that suggested reducing the max interval due to sync issues (this warning appeared with approx_sync set to True; there was no such message with approx_sync set to False).
Worth mentioning:
I have access to simulated IMU data published on /imu, which I believe the VO node subscribes to by default.
My environment is a simulated space environment, so there may sometimes be no depth data when the sensor faces the emptiness of space with nothing else in sight.
I also have access to point cloud data that I can use for odometry if needed, though I'm not sure whether I would run into similar problems with the icp_odometry node.
Please let me know how I can get VO and mapping working.
Currently I don't see any errors, but there is no output at all in rtabmap_viz. The only warnings I see are:
[rtabmap_viz-3] libpng warning: iCCP: known incorrect sRGB profile
[rtabmap_viz-3] libpng warning: iCCP: known incorrect sRGB profile
[rtabmap_viz-3] libpng warning: iCCP: known incorrect sRGB profile
and
[rtabmap-2] [WARN] [1720876387.967998065] []: Messages of type 3 arrived out of order (will print only once)
[rtabmap-2] [WARN] [1720876387.971761013] []: Messages of type 1 arrived out of order (will print only once)
[rtabmap-2] [WARN] [1720876387.972760730] []: Messages of type 2 arrived out of order (will print only once)
I got it working with rgbd_sync, some frame corrections, and parameter adjustments. It was quite tedious due to the lack of documentation; I hope this gets resolved soon.
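For anyone landing here later: the rgbd_sync route adds an rtabmap_sync/rgbd_sync node that bundles RGB, depth, and camera_info into a single synchronized rgbd_image topic, which the odometry and SLAM nodes then consume. A rough sketch of the extra pieces, reusing the topic names from the launch file above (this illustrates the approach, not the poster's exact working configuration):

```python
# Sketch only: rgbd_sync bundles RGB + depth + camera_info into one
# synchronized rgbd_image topic. Topic names are taken from the launch
# file above; 'subscribe_rgbd' is then enabled on the consumer node so
# it reads rgbd_image instead of the separate topics.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_sync', executable='rgbd_sync', output='screen',
            parameters=[{'approx_sync': True, 'use_sim_time': True}],
            remappings=[
                ('rgb/image', '/camera/color/image'),
                ('rgb/camera_info', '/camera/color/image/camera_info'),
                ('depth/image', '/camera/depth/image_raw')]),
        Node(
            package='rtabmap_odom', executable='rgbd_odometry', output='screen',
            parameters=[{'frame_id': 'camera_frame',
                         'use_sim_time': True,
                         'subscribe_rgbd': True}]),
    ])
```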
Hello, I'm trying to use this package to do VIO on a robot operating in a simulated microgravity environment. I plan to fuse the IMU with the VO output from RTAB-Map through the robot_localization package, and then use the filtered odometry output for mapping. My simulated sensor gives compressed RGB and depth image outputs. Could you please tell me how to set up or launch the nodes for this? I'm sorry, I'm a little lost, as I couldn't find any specific documentation for this on the ROS2 platform. My launch file looks like this, which is based on the launch file given for the ROS2 TurtleBot3 RGBD demo.
Overall, I just need to know how to set it up for VO output and use it for mapping. Thanks in advance.