TixiaoShan / LIO-SAM

LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
BSD 3-Clause "New" or "Revised" License

What happens if one scan matching fails #116

Closed · int-smart closed 4 years ago

int-smart commented 4 years ago

Hi, thanks for the nice algorithm. I had a question about the information exchanged between the two factor graphs. From what I understood, one factor graph estimates the IMU bias, which gives us an accurate IMU-based initial guess for scan matching. The scan-matching estimates, along with GPS and loop factors, go into the second factor graph. Currently I am not using GPS and have loop closure disabled, which essentially means the second factor graph contains only the scan-matching estimates of the pose-graph nodes.
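For readers unfamiliar with the split, here is a minimal GTSAM sketch of the two-graph structure described above. The factor types match what LIO-SAM uses, but the graph construction is purely illustrative, not the project's actual code:

```cpp
// Illustrative sketch of LIO-SAM's two-graph split (not the actual code).
// Graph 1 (imuPreintegration) fuses lidar-odometry poses with IMU factors to
// estimate velocity and bias; graph 2 (mapOptimization) is the keyframe pose
// graph fed by scan matching (plus GPS/loop factors when enabled).
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/navigation/ImuFactor.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using gtsam::symbol_shorthand::B;  // IMU bias
using gtsam::symbol_shorthand::V;  // velocity
using gtsam::symbol_shorthand::X;  // pose

int main() {
  auto poseNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-2);

  // Graph 1: each new lidar-odometry pose enters as a prior, with an IMU
  // preintegration factor connecting consecutive states. Optimizing this
  // graph yields the bias/velocity used to predict the next initial guess:
  gtsam::NonlinearFactorGraph imuGraph;
  imuGraph.add(gtsam::PriorFactor<gtsam::Pose3>(X(1), gtsam::Pose3(), poseNoise));
  // imuGraph.add(gtsam::ImuFactor(X(0), V(0), X(1), V(1), B(0), preintegrated));

  // Graph 2: scan-matching results become between-factors over keyframes.
  gtsam::NonlinearFactorGraph mapGraph;
  gtsam::Pose3 relPose;  // relative pose from scan-to-map matching
  mapGraph.add(gtsam::BetweenFactor<gtsam::Pose3>(X(0), X(1), relPose, poseNoise));
  return 0;
}
```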

Thanks for your work and help.

TixiaoShan commented 4 years ago

> My question is: if for some reason one scan-matching result is quite far off from the ground truth (something we can see in degraded environments), will the map become inconsistent?

Yes. If a scan matching fails, the IMU preintegration will get a bad constraint. As a result, the estimated bias will deviate from its true value. Eventually the scan-matching initial guess, which comes from the IMU, will be bad too.

> I am observing in my data that one bad scan-matching estimate (lidar odometry) causes the IMU odometry to jump a lot in the next timestep, producing an inconsistent map. What could be the reason for this, and is there any way to fix it?

The possible reason is mentioned above. One way to fix it is to introduce a failure-detection mechanism, for example by tuning the parameters here.
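For context, the failure detection meant here is a sanity check after optimization, modeled on the check in LIO-SAM's imuPreintegration.cpp: if the optimized velocity or IMU bias becomes implausibly large, treat the constraint as bad and reset the preintegration graph. A hedged sketch, where the thresholds are the tunable parameters:

```cpp
// Sketch of a failure-detection check, modeled on imuPreintegration.cpp:
// after a bad scan-matching constraint, the optimized velocity or bias tends
// to blow up, so large norms signal that the graph should be reset.
#include <Eigen/Dense>
#include <gtsam/navigation/ImuBias.h>

bool failureDetection(const gtsam::Vector3& velCur,
                      const gtsam::imuBias::ConstantBias& biasCur)
{
    Eigen::Vector3f vel(velCur.x(), velCur.y(), velCur.z());
    if (vel.norm() > 30)                      // implausible speed -> bad constraint
        return true;

    Eigen::Vector3f ba(biasCur.accelerometer().x(),
                       biasCur.accelerometer().y(),
                       biasCur.accelerometer().z());
    Eigen::Vector3f bg(biasCur.gyroscope().x(),
                       biasCur.gyroscope().y(),
                       biasCur.gyroscope().z());
    if (ba.norm() > 1.0 || bg.norm() > 1.0)   // runaway bias estimate
        return true;

    return false;
}
```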

int-smart commented 4 years ago

Thanks for the clarification. I had one more question for you. I am working on an indoor dataset, which looks like this: [image]

The algorithm works well in some portions of the dataset, as can be seen in this image: [image]. However, I notice jumps at some places in the dataset. My best guess at the moment is that the scan-to-map matching fails at those jumps (one of the issues mentions that LIO-SAM does not perform well indoors). I am also using an IMU with a 50 Hz update rate, which could be a problem. Let me know if you have any insights. I have attached a video in case it helps: https://drive.google.com/file/d/12E3-gJaaABTTfeO1R5nTj-V-CHMTgiKb/view?usp=sharing

TixiaoShan commented 4 years ago

The first jump doesn't seem to be a scan-matching failure. Can you print out the initial guess in mapOptimization to see if it is correct? It looks like the initial guess jumped in yaw.
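For reference, one way to print that initial guess, assuming LIO-SAM's `transformTobeMapped` layout of {roll, pitch, yaw, x, y, z}, would be a debug line near the end of `updateInitialGuess()` in mapOptimization.cpp:

```cpp
// Hypothetical debug line; transformTobeMapped holds {roll, pitch, yaw, x, y, z}.
ROS_INFO("initial guess  rpy: %f %f %f  xyz: %f %f %f",
         transformTobeMapped[0], transformTobeMapped[1], transformTobeMapped[2],
         transformTobeMapped[3], transformTobeMapped[4], transformTobeMapped[5]);
```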

Pallav1299 commented 4 years ago

I am also facing a similar scan-matching failure issue with my setup, shown here: [image]

Can we evaluate the performance of scan-matching in LIO-SAM?
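There is no single quality score, but one proxy LIO-SAM already computes (inherited from LOAM) is a degeneracy test on the scan-matching Hessian: small eigenvalues correspond to under-constrained directions, e.g. along a featureless corridor. A sketch of that idea, assuming a 6x6 approximate Hessian J^T J built elsewhere from the point-to-edge/plane residuals:

```cpp
// Sketch of the LOAM-style degeneracy test used in mapOptimization.cpp:
// eigen-decompose the 6x6 Gauss-Newton Hessian and flag the match as
// degenerate when its weakest direction has a tiny eigenvalue.
#include <opencv2/core.hpp>

bool scanMatchingDegenerate(const cv::Mat& matAtA /* 6x6 J^T J, CV_32F */)
{
    cv::Mat eigenvalues, eigenvectors;
    cv::eigen(matAtA, eigenvalues, eigenvectors);  // sorted in descending order
    const float eigThreshold = 100.0f;             // value used in LIO-SAM
    // A tiny smallest eigenvalue means the optimization is under-constrained
    // in some direction, so the scan-matching result is suspect.
    return eigenvalues.at<float>(5, 0) < eigThreshold;
}
```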

int-smart commented 4 years ago

> The first jump doesn't seem to be a scan-matching failure. Can you print out the initial guess in mapOptimization to see if it is correct? It looks like the initial guess jumped in yaw.

I checked the published topics (basically /odometry/imu and /lio_sam/mapping/odometry) to find the reason for the jump. I assume that /odometry/imu acts as the initial guess for the scan-to-map matching stage; if that's not exactly true, I can print the values as well. In the topics, I noticed that the lidar odometry published a jump earlier than the IMU odometry, as you can see in these pictures: [image]

[image]

The vector.x, vector.y, and vector.z above correspond to roll, pitch, and yaw for /odometry/imu_euler and /odometry/lidar_euler, which are derived from the orientations in the /odometry/imu and /lio_sam/mapping/odometry topics. In the figure, the lidar odometry yaw jumps at 1582750900.388618 from 0.125150 to 0.495046, while the yaw published by the IMU preintegration odometry was still 0.193271 at 1582750900.71225319. The full rosbag is attached here in case it helps: https://drive.google.com/file/d/1XEZgs8KWalqEE6x_peGBNIDQqGqaWdLp/view?usp=sharing
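For anyone reproducing this comparison, here is a minimal, hypothetical node (not part of LIO-SAM) that extracts roll/pitch/yaw from both topics using tf's quaternion-to-RPY conversion:

```cpp
// Hypothetical helper node for the yaw comparison above (not part of LIO-SAM).
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_datatypes.h>

void printYaw(const nav_msgs::Odometry::ConstPtr& msg, const char* tag)
{
    tf::Quaternion q;
    tf::quaternionMsgToTF(msg->pose.pose.orientation, q);
    double roll, pitch, yaw;
    tf::Matrix3x3(q).getRPY(roll, pitch, yaw);  // quaternion -> Euler angles
    ROS_INFO("%s t=%.6f roll=%.6f pitch=%.6f yaw=%.6f",
             tag, msg->header.stamp.toSec(), roll, pitch, yaw);
}

void imuCb(const nav_msgs::Odometry::ConstPtr& msg)   { printYaw(msg, "imu");   }
void lidarCb(const nav_msgs::Odometry::ConstPtr& msg) { printYaw(msg, "lidar"); }

int main(int argc, char** argv)
{
    ros::init(argc, argv, "yaw_compare");
    ros::NodeHandle nh;
    ros::Subscriber s1 = nh.subscribe("/odometry/imu", 100, imuCb);
    ros::Subscriber s2 = nh.subscribe("/lio_sam/mapping/odometry", 100, lidarCb);
    ros::spin();
    return 0;
}
```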

TixiaoShan commented 4 years ago

@int-smart Thanks for sharing the bag. Can you provide the yaml parameters for your setup? I will take a closer look.

int-smart commented 4 years ago

The params.yaml is here: https://drive.google.com/file/d/1ALj1LxkM1sUqPfkDyfcefHQejnlrYPCf/view?usp=sharing

TixiaoShan commented 4 years ago

@int-smart I checked your bag file. There is no point cloud or IMU data in it.

int-smart commented 4 years ago

@TixiaoShan Can I get your email address for sharing the data?

TixiaoShan commented 4 years ago

@int-smart Here is my email.

TixiaoShan commented 4 years ago

@int-smart I checked your new bag with the point cloud data. It seems that your point cloud is not correct. Viewed from the center of the lidar, the point cloud should show N lines of points; a VLP-16, for example, gives 16 lines:

[image]

Your point cloud looks corrupted when viewed from the center of the lidar:

[image]

So imageProjection.cpp won't work properly with this malformed lidar data.
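To make the "N lines" requirement concrete, here is a condensed sketch of the projection imageProjection.cpp performs (simplified; the real code also deskews points with IMU data). Each point's `ring` channel selects the row of a range image and its horizontal angle selects the column, which is why a cloud without consistent ring values breaks the pipeline:

```cpp
// Simplified sketch of the ring-based range-image projection (not the full
// imageProjection.cpp, which also deskews points using IMU data).
#include <cmath>
#include <cstdint>
#include <vector>

struct PointXYZIR {            // minimal point type carrying the ring index
    float x, y, z, intensity;
    std::uint16_t ring;        // which lidar line the point came from
};

void projectToRangeImage(const std::vector<PointXYZIR>& cloud,
                         int N_SCAN, int Horizon_SCAN,
                         std::vector<std::vector<float>>& rangeMat)
{
    rangeMat.assign(N_SCAN, std::vector<float>(Horizon_SCAN, -1.0f));
    for (const auto& p : cloud) {
        int row = p.ring;                      // row = lidar line index
        float horizonAngle = std::atan2(p.x, p.y) * 180.0f / M_PI;
        float ang_res_x = 360.0f / Horizon_SCAN;
        int col = static_cast<int>(-std::round((horizonAngle - 90.0f) / ang_res_x))
                  + Horizon_SCAN / 2;
        if (col >= Horizon_SCAN)
            col -= Horizon_SCAN;
        if (row < 0 || row >= N_SCAN || col < 0 || col >= Horizon_SCAN)
            continue;                          // inconsistent ring/angle data
        rangeMat[row][col] = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    }
}
```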

int-smart commented 4 years ago

The reason the point cloud does not contain straight lines is that we are combining two point clouds from lidars mounted at an angle to each other. The raw point clouds from the two Velodynes are transformed into the base_link frame and merged into the single cloud you see above.
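A sketch of the merge described here (frame names are hypothetical), using pcl_ros; note that concatenating like this discards the per-lidar ring ordering that imageProjection relies on:

```cpp
// Hypothetical two-lidar merge as described above; frame names are made up.
// The concatenated cloud no longer has a single consistent ring structure.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_ros/transforms.h>
#include <tf/transform_listener.h>

pcl::PointCloud<pcl::PointXYZI> mergeClouds(
    const pcl::PointCloud<pcl::PointXYZI>& front,  // frame: velodyne_front
    const pcl::PointCloud<pcl::PointXYZI>& rear,   // frame: velodyne_rear
    const tf::TransformListener& listener)
{
    pcl::PointCloud<pcl::PointXYZI> frontBase, rearBase;
    pcl_ros::transformPointCloud("base_link", front, frontBase, listener);
    pcl_ros::transformPointCloud("base_link", rear,  rearBase,  listener);
    return frontBase + rearBase;   // concatenation discards ring structure
}
```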

However, why is such an organization of the point cloud necessary?

Also, I agree this can be one of the reasons, but I don't think it is the only one. I have since tried using just one point cloud from one of the Velodynes (I checked that the point cloud is organized in this case) and I still see these jumps. I have sent you the data in case you need it.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Zahra-Farajzadeh commented 2 months ago

> @int-smart I checked your new bag with the point cloud data. It seems that your point cloud is not correct. Viewed from the center of the lidar, the point cloud should show N lines of points; a VLP-16, for example, gives 16 lines:
>
> [image]
>
> Your point cloud looks corrupted when viewed from the center of the lidar:
>
> [image]
>
> So imageProjection.cpp won't work properly with this malformed lidar data.

Hello, how did you plot these figures?