hello-robot / stretch_ros2

ROS 2 packages for the Stretch mobile manipulators from Hello Robot Inc.
https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/

Noisy rtabmap recording #98

Open xuq1mu opened 4 months ago

xuq1mu commented 4 months ago

Hi, I used the stretch_rtabmap package to record a map, but the recorded result was quite noisy. When I use the recorded rtabmap to navigate the Stretch robot, it often gets stuck because of the recorded noise: specifically, the robot is sitting on an empty plane, but the noisy map makes it think it is surrounded by obstacles.

If I understand it correctly, the recorded 3D point cloud is projected down into the 2D cost map used for navigation, so the noisy point cloud produces non-existent obstacles in the cost map.

Is there any filter that could be applied during recording (for example, along the lines of the sketch below)? Did you run into the same issue with noisy maps?
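
For what it's worth, here is roughly what I had in mind. This is only a sketch: it assumes the upstream rtabmap_ros ROS 2 package layout (rtabmap_slam / rtabmap) and that the node accepts rtabmap's standard Grid/* noise-filtering parameters. I have not checked what stretch_rtabmap actually launches, so the package, executable, and parameter plumbing here are assumptions on my part, and the topic remappings are omitted.

```python
# Sketch only: launching rtabmap with its built-in grid noise filtering enabled.
# Package/executable names follow the upstream rtabmap_ros ROS 2 layout; the
# topic remappings that stretch_rtabmap normally sets up are omitted.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    rtabmap_node = Node(
        package='rtabmap_slam',   # assumption: rtabmap_ros >= 0.21 package split
        executable='rtabmap',
        output='screen',
        parameters=[{
            # rtabmap's internal parameters are passed as strings.
            # Drop isolated points before they are projected into the 2D grid.
            'Grid/NoiseFilteringRadius': '0.05',       # metres
            'Grid/NoiseFilteringMinNeighbors': '5',
            # Ignore returns above the robot and far away, which otherwise get
            # projected down as phantom obstacles.
            'Grid/MaxObstacleHeight': '1.5',           # metres
            'Grid/RangeMax': '3.0',                    # metres
        }],
    )
    return LaunchDescription([rtabmap_node])
```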

Thanks in advance!

hello-binit commented 4 months ago

Hi @xuq1mu, I haven't tried the stretch_rtabmap package before. I'd be happy to give it a shot and see whether I get the same noise you're seeing in the map. Could you explain the steps you took and the commands you executed to capture the map? Could you also include the robot's serial number (it should be printed on a sticker in the robot's trunk)?

xuq1mu commented 4 months ago

Thanks for your reply. I followed the README and used the command ros2 launch stretch_rtabmap visual_mapping.launch.py teleop_type:=joystick to capture the map. Do you think the problem could be the resolution of the RealSense camera?
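
If resolution is the issue, I could imagine bringing the camera up at a lower profile, roughly like the sketch below. The parameter names are from the realsense2_camera ROS 2 wrapper and vary slightly across wrapper versions; I have not checked how stretch_core actually starts the camera, so treat the node and parameter names as assumptions.

```python
# Sketch only: starting the RealSense at a lower resolution. Parameter names
# come from the realsense2_camera ROS 2 wrapper (newer versions rename them);
# how stretch_core actually launches the camera may differ.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    camera_node = Node(
        package='realsense2_camera',
        executable='realsense2_camera_node',
        output='screen',
        parameters=[{
            'depth_module.profile': '424x240x30',  # lower-resolution depth stream
            'rgb_camera.profile': '424x240x30',    # lower-resolution color stream
            'align_depth.enable': True,
        }],
    )
    return LaunchDescription([camera_node])
```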

hello-binit commented 4 months ago

Thanks @xuq1mu. I'll have to try it to know for sure. Could you also send me your robot's serial number (it should be printed on a sticker in the robot's trunk)?

xuq1mu commented 3 months ago

Hi, after checking the robot's entire body, I still could not find the sticker with the serial number. Could you be a bit more specific about where it is?

xuq1mu commented 3 months ago

Hi, after some experimenting, I've found that switching to a lower resolution and pointing the camera towards the ground helps. By the way, during navigation with rtabmap, I found that the robot still uses only the 2D lidar to localize itself. Shouldn't it use the camera information for localization and collision avoidance?