sooyoungmoon opened this issue 3 weeks ago
Hi @sooyoungmoon Converting the SLAM guide for use with ROS2 is not something RealSense users usually do, so unfortunately there is no previous example available to compare your own work against in order to diagnose problems.
RealSense ROS2 users have typically used a combination of slam_toolbox and depthimage_to_laserscan to implement a SLAM system, as described at https://github.com/IntelRealSense/realsense-ros/issues/3046#issuecomment-1999092540
slam_toolbox: https://github.com/SteveMacenski/slam_toolbox
depthimage_to_laserscan: https://github.com/ros-perception/depthimage_to_laserscan/tree/ros2
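Both packages are typically installable as binaries on ROS2 distributions such as Humble; assuming the standard ROS2 package naming convention, something like
$ sudo apt install ros-humble-slam-toolbox ros-humble-depthimage-to-laserscan
should work, or they can be built from the sources linked above.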
Hi @MartyG-RealSense, thank you for your quick response to my question. I'll try a combination of slam_toolbox and depthimage_to_laserscan and share the result. :)
You are very welcome, @sooyoungmoon - I look forward to your next report. Good luck!
@MartyG-RealSense
Hello (and to other researchers & developers reading this thread), I hope you had a good day.
I'm struggling with this 'SLAM with D435i' issue: I cannot get any map data to show in rviz2.
Instead of trying SLAM-with-D435i-wiki, I installed and tested slam_toolbox and depthimage_to_laserscan.
In short, I found that the depth->laserscan data conversion worked well, but the map data was not published by the slam_toolbox node. I will check similar issues in this and related repos, but I would appreciate any hints you can give me.
The test procedure is as follows:
1) (Executed the realsense2 camera node) $ ros2 launch realsense2_camera rs_launch.py (with remapping config)
2) (Executed the package to convert depth to scan) $ ros2 launch depthimage_to_laserscan depthimage_to_laserscan-launch.py (see the wiring sketch after this list)
3) $ ros2 launch slam_toolbox online_async_launch.py
4) $ rviz2
5) (On rviz2) I added the '/map' topic -> a "No map received" message appeared
6) $ ros2 run tf2_tools view_frames -> there was no 'map' frame on the tf2 tree diagram.
7) I executed a static_transform_publisher as below: $ ros2 run tf2_ros static_transform_publisher --x 0 --y 0 --z 0 --qx 0 --qy 0 --qz 0 --qw 1 --frame-id map --child-frame-id camera_link
8) $ ros2 run tf2_tools view_frames -> the 'map' frame was added to the tf2 tree diagram: tf2_tree(slam_toolbox and depthimage_to_laserscan).pdf
9) When I added the '/scan' topic, I could see the laser scan image on rviz2. Still, I couldn't see any map data on rviz2 ($ ros2 topic echo /map returned nothing).
10) The scan data itself seemed to be published well ($ ros2 topic echo /scan printed scan messages on the screen).
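For reference, a hedged sketch of how the remapping in steps 1)-2) might be wired up in a single launch file, run alongside rs_launch.py. The topic names, parameter values, and frame ids below are assumptions based on common realsense-ros and depthimage_to_laserscan defaults, not the exact config used above; verify them with `ros2 topic list` on your own system.

```python
# Hypothetical launch file wiring the RealSense depth stream into
# depthimage_to_laserscan (assumed topic names; adjust to your setup).
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='depthimage_to_laserscan',
            executable='depthimage_to_laserscan_node',
            name='depthimage_to_laserscan',
            remappings=[
                # Common realsense-ros default topic names.
                ('depth', '/camera/depth/image_rect_raw'),
                ('depth_camera_info', '/camera/depth/camera_info'),
            ],
            parameters=[{
                # Frame the generated LaserScan is stamped with; it must
                # exist in the TF tree that slam_toolbox sees.
                'output_frame': 'camera_depth_frame',
                'range_min': 0.2,
                'range_max': 4.0,
            }],
        ),
    ])
```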
Hi @sooyoungmoon Do you experience the message
[INFO] [1655749660.201251796] [slam_toolbox]: Message Filter dropping message: frame 'camera_depth_frame'
like in the slam_toolbox case at https://github.com/IntelRealSense/realsense-ros/issues/2387 where a RealSense user was not receiving map data?
At the end of that case, another RealSense user suggested at https://github.com/IntelRealSense/realsense-ros/issues/2387#issuecomment-1667319135 publishing the odometry value to the /odom topic.
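A minimal sketch of that suggestion might look like the following. It assumes slam_toolbox's default frame names (odom, with camera_link standing in for the base frame, as elsewhere in this thread) and publishes an identity pose; a real publisher would fill the pose from actual odometry.

```python
# Hypothetical minimal odometry publisher on /odom. The static identity
# pose is a placeholder; replace it with real odometry on a robot.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdomPublisher(Node):
    def __init__(self):
        super().__init__('odom_publisher')
        self.pub = self.create_publisher(Odometry, 'odom', 10)
        self.timer = self.create_timer(0.05, self.tick)  # 20 Hz

    def tick(self):
        msg = Odometry()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'odom'
        msg.child_frame_id = 'camera_link'  # assumed base frame
        msg.pose.pose.orientation.w = 1.0   # identity pose
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(OdomPublisher())


if __name__ == '__main__':
    main()
```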
@MartyG-RealSense You are right! That's the same message, except that in my case the reason is 'discarding-message-because-the-queue-is-full'. Based on the user's suggestion you mentioned and other resources such as the slam_toolbox repo, I believe I have figured out how to solve this problem: create a new node that publishes transform data (odom_frame -> base_frame) given IMU data. I think I can solve this issue in a few days. Thank you!
You are very welcome, @sooyoungmoon - I'm pleased that I could be of help. Good luck!
@MartyG-RealSense
I created a new ROS2 node which calculates the camera pose from IMU data and then publishes transform data (frame_id: map, child_frame_id: odom). After that, the map topic data was published by the /slam_toolbox node. But there is a new issue.
Q. Is there a better way to compute the camera pose than my approach (computing the camera pose from IMU data only)?
At first, I tried to use the orientation within the Imu message to estimate the camera pose. But it was too unstable, so I abandoned this approach after a few trials.
After that, I tried to compute the position and orientation of my camera using the linear acceleration and angular velocity values (a sketch of this kind of node follows the topic output below).
So far it has not been very successful, because the camera position drifts and the estimated position moves farther and farther away from the real position.
In particular, the absolute value of the linear acceleration on the y axis is strangely high (please refer to the topic data shown below).
```
$ ros2 topic echo /imu/data
...
header:
  stamp:
    sec: 1730819791
    nanosec: 632761600
  frame_id: camera_imu_optical_frame
orientation:
  x: -0.49896361969468156
  y: 0.48194959538813315
  z: -0.5038171864430749
  w: 0.5147117021896622
orientation_covariance:
---
header:
  stamp:
    sec: 1730819791
    nanosec: 637771520
  frame_id: camera_imu_optical_frame
orientation:
  x: -0.4990473420685022
  y: 0.4815993639143508
  z: -0.5041232176980946
  w: 0.5146587067459129
orientation_covariance:
---
header:
  stamp:
    sec: 1730819791
    nanosec: 642781952
  frame_id: camera_imu_optical_frame
orientation:
  x: -0.49911623571819935
  y: 0.48125715574947686
  z: -0.5044428763567813
  w: 0.5145987930177406
orientation_covariance:
---
header:
  stamp:
    sec: 1730819791
    nanosec: 647792384
  frame_id: camera_imu_optical_frame
orientation:
  x: -0.49915365280729845
  y: 0.4815220580607172
  z: -0.50418338333173
  w: 0.5145689987380432
orientation_covariance:
---
header:
  stamp:
    sec: 1730819791
    nanosec: 652804608
  frame_id: camera_imu_optical_frame
orientation:
  x: -0.49919502532603227
  y: 0.4817768005011726
  z: -0.5039204720266867
  w: 0.5145479560361664
orientation_covariance:
```
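For reference, a minimal sketch of this kind of IMU-to-TF node (the /imu/data topic and the odom -> camera_link transform are taken from this thread; everything else is an illustrative assumption, not the exact node described above). Note that in camera_imu_optical_frame gravity points roughly along +y when the camera is level, which would explain the 'strangely high' y-axis linear acceleration: it has to be rotated into the world frame and subtracted before integrating, and even then pure IMU dead reckoning drifts within seconds.

```python
# Hypothetical sketch: subscribe to /imu/data, dead-reckon a position by
# naive double integration, and broadcast an odom -> camera_link TF.
# Illustrative only: it neither rotates accelerations into the world
# frame nor removes gravity, both of which a real node must do.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster


class ImuOdomNode(Node):
    def __init__(self):
        super().__init__('imu_odom_node')
        self.broadcaster = TransformBroadcaster(self)
        self.sub = self.create_subscription(Imu, '/imu/data', self.on_imu, 50)
        self.pos = [0.0, 0.0, 0.0]
        self.vel = [0.0, 0.0, 0.0]
        self.last_t = None

    def on_imu(self, msg: Imu):
        t = msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9
        if self.last_t is not None:
            dt = t - self.last_t
            a = msg.linear_acceleration
            # Naive integration in the sensor frame (no gravity removal).
            for i, ai in enumerate((a.x, a.y, a.z)):
                self.vel[i] += ai * dt
                self.pos[i] += self.vel[i] * dt
        self.last_t = t

        tf = TransformStamped()
        tf.header.stamp = msg.header.stamp
        tf.header.frame_id = 'odom'
        tf.child_frame_id = 'camera_link'  # assumed base frame
        tf.transform.translation.x = self.pos[0]
        tf.transform.translation.y = self.pos[1]
        tf.transform.translation.z = self.pos[2]
        tf.transform.rotation = msg.orientation  # reuse fused orientation
        self.broadcaster.sendTransform(tf)


def main():
    rclpy.init()
    rclpy.spin(ImuOdomNode())


if __name__ == '__main__':
    main()
```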
RealSense 400 Series cameras do not have the built-in ability to calculate the relative position of the camera device. The only approach that I know of (other than the ROS2 one that you created yourself) is that of a ROS1 user at the link below, who used rtabmap to study a recorded pointcloud map file and obtain the relative position from that map.
Hi @sooyoungmoon Do you require further assistance with this case, please? Thanks!
@MartyG-RealSense Hello, I managed to draw a map using slam_toolbox and depthimage_to_laserscan. This was possible by providing transform data (odom to camera_link). I have one more question: when I move my D435i camera to generate maps, the realsense2_camera node often loses its connection to the camera and tries to re-connect. Could you tell me under which conditions this happens? Thanks.
I'm pleased to hear that you succeeded in drawing a map!
In regard to your new question, Intel's D435i SLAM guide for ROS1 provides the following guidance:
"The built-in IMU can only keep track for a very short time. Moving or turning too quickly will break the sequence of successful point cloud matches and will result in the system losing track. It could happen that the system will recover immediately if you stop moving, but if not, the longer the time that has passed since the break, the farther away it will drift from the correct position. The odds of recovery get very slim, very quickly."
SLAM tends to work better in scenes that are not too sparse and have objects in them for the camera to analyze.
Issue Description
Hello, I'm following SLAM-with-D435i-wiki to test SLAM using an Intel RealSense D435i in a ROS2 (Humble) environment.
After modifying a launch file and a few parameters, I was able to execute all the necessary nodes. I can see the 'camera_link' frame move as I change the position and orientation of my D435i camera. However, soon after I move the camera, I can see in rviz that the 'camera_link' frame 'floats away' from the real pose of the camera, and the following error messages are printed on the screen.
In addition, the /rtabmap/odom topic contains abnormal values (the covariance matrix has very large diagonal values, e.g. 9999) as soon as the errors occur.
I'm curious why these errors occur simultaneously across multiple ROS2 nodes, including /rtabmap/rgbd_odometry, /rtabmap/rtabmap, /ukf_filter_node, and /rviz2.
Could you give me some hints to solve the issue? Thank you!!
For your reference, I attach the parameters I changed and the launch file I used (opensource_tracking_ros2.launch.py)
-------------------- opensource_tracking_ros2.launch.py ------------------------------
```python
import os

from ament_index_python import get_package_share_directory
from launch import LaunchContext, LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.actions import IncludeLaunchDescription
from launch.actions import SetLaunchConfiguration
from launch.actions import GroupAction
from launch.conditions import IfCondition, UnlessCondition
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration
from launch.substitutions import TextSubstitution
from launch.substitutions import ThisLaunchFileDir
from launch_ros.actions import Node
from launch_ros.actions import PushRosNamespace


def generate_launch_description():
```
------------------------------------------------ end ----------------------------------------------