introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

Instability of trajectory points in the process of two-sensor modeling #348

Open TomLuosy opened 5 years ago

TomLuosy commented 5 years ago

Hi, I ran into a problem while mapping in ROS with two sensors. When the sensors are not rotating, the trajectory points are very stable, but as soon as I start rotating them, the trajectory points begin to drift, even if the sensors are only rotating in place. I would like to know what determines the trajectory points when mapping with two sensors. How can I improve this situation? Thanks.

TomLuosy commented 5 years ago

I read a lot about SLAM today. I now think the problem is that the translation between the sensors is not set up properly. I remember that I previously corrected the sensors' rotation matrix (the R in SLAM) by modifying parameters in the launch file. So I would like to know which files in ROS I can modify to change the sensors' translation, i.e. the T. I look forward to your reply. Thank you.

matlabbe commented 5 years ago

What I see is that when you rotate the sensor, the camera lacks visual features, causing very large drift; ultimately it could get lost (red screen). Which sensors are you using? You can use static_transform_publisher to create a transform between them. What does your tf tree look like (rosrun tf view_frames)?
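For reference, a minimal sketch of those two commands (exact frame names depend on your setup):

```
# Dump the current tf tree to frames.pdf for inspection (listens for ~5 s)
rosrun tf view_frames

# Generic form of a fixed transform, Euler variant (units: metres and radians):
# rosrun tf static_transform_publisher x y z yaw pitch roll parent_frame child_frame period_in_ms
```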

TomLuosy commented 5 years ago

> What I see is that when you rotate the sensor, the camera lacks visual features, causing very large drift; ultimately it could get lost (red screen). Which sensors are you using? You can use static_transform_publisher to create a transform between them. What does your tf tree look like (rosrun tf view_frames)?

Hi, thanks for your reply! I found where the transform is set in the launch file. The last four parameters (the rotation) are set well, which also explains why the mapping works very well without rotation. How do I set the first three parameters, the translation, so that the transform matches the motion of the sensors?

matlabbe commented 5 years ago

You should check what the /camera1_link and /camera2_link frames are referring to. For example, if the camera link is the base of the camera and the cameras are 10 cm apart, centred on the y axis of the base_link frame, one translation would be 0 -0.05 0 and the other 0 0.05 0 (static_transform_publisher expects metres). Note that for convenience you can also set the angle in Euler format (yaw pitch roll) instead of as a quaternion. Do you have a photo of your setup?
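A minimal sketch of those two transforms on the command line (the 0.05 m offsets are the illustrative values above; frame names are assumed from this thread):

```
# args: x y z yaw pitch roll parent_frame child_frame period_in_ms
rosrun tf static_transform_publisher 0  0.05 0 0 0 0 base_link camera1_link 100
rosrun tf static_transform_publisher 0 -0.05 0 0 0 0 base_link camera2_link 100
```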

TomLuosy commented 5 years ago

I spent a few days tracking down the cause of the problem. The sensors I am using are Intel D435 cameras. In the official documentation I found that when modeling with dual sensors, the origin of the reference coordinate system is placed at the left sensor. For my mapping, however, I want the origin between the two sensors. What I did before only made the two images parallel, but in reality the arrangement is more like a parallelogram. This also explains why the results were very good when mapping without rotating the sensors. So the problem I need to solve now is how to set the origin of the reference coordinate system between the two sensors.

matlabbe commented 5 years ago

I would put a base_link frame at the middle base of the robot, then from there a frame for each camera: base_link -> camera1_link and base_link -> camera2_link. You can show the TF tree in RViz to check that the frames / point clouds match the actual setup.
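A hedged sketch of that recommendation as launch-file entries (the node names are hypothetical and the 0.05 m offsets reuse the illustrative values from earlier in this thread):

```xml
<!-- Hypothetical example: base_link centred between two cameras 10 cm apart on the y axis -->
<!-- static_transform_publisher args: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
<node pkg="tf" type="static_transform_publisher" name="camera1_tf"
      args="0 0.05 0 0 0 0 base_link camera1_link 100" />
<node pkg="tf" type="static_transform_publisher" name="camera2_tf"
      args="0 -0.05 0 0 0 0 base_link camera2_link 100" />
```

With these running, rosrun tf view_frames should show base_link as the parent of both camera frames.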