rpng / MINS

An efficient and robust multisensor-aided inertial navigation system with online calibration that is capable of fusing IMU, camera, LiDAR, GPS/GNSS, and wheel sensors. Use cases: VINS/VIO, GPS-INS, LINS/LIO, multi-sensor fusion for localization and mapping (SLAM). This repository also provides multi-sensor simulation and data.
GNU General Public License v3.0

Questions about LiDAR noise and consistency #12

Closed lewisjiang closed 1 year ago

lewisjiang commented 1 year ago

Hi, thank you for open-sourcing this comprehensive sensor fusion system. I have some questions regarding the LiDARs.

In the LiDAR odometry part, you use direct scan-to-submap registration as the residual, similar to FAST-LIO2. I am quite curious about the map_noise parameter used in simulation. If I understand correctly, this parameter is used to whiten the point-to-plane residuals of the neighbor points found in the local map. However, the neighbor points are only selected if they pass a plane sanity check, i.e., their distance to the fitted plane must be smaller than plane_max_p2pd, and this is a hard constraint. Yet map_noise (the uncertainty we assume the points have) is 5x plane_max_p2pd (which bounds the actual uncertainty), so it looks like a deliberate inflation of the point-to-plane residual uncertainty, which should(?) lead to a smaller NEES (i.e., a conservative estimator).
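A minimal sketch of the whitening pipeline described above (a hypothetical illustration, not MINS code): fit a plane to the neighbor points via SVD, apply the hard plane_max_p2pd sanity check, then divide the point-to-plane distance by map_noise.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: unit normal n and offset d with n.p + d ~ 0."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value,
    # i.e., the direction of least spread of the centered points.
    n = np.linalg.svd(points - centroid)[2][-1]
    return n, -n @ centroid

def point_plane_residual(p, neighbors, plane_max_p2pd=0.1, map_noise=0.5):
    """Whitened point-to-plane residual, or None if the sanity check fails."""
    n, d = fit_plane(neighbors)
    # Hard constraint: every neighbor must lie within plane_max_p2pd of the plane.
    if np.max(np.abs(neighbors @ n + d)) > plane_max_p2pd:
        return None
    return (n @ p + d) / map_noise  # whiten by the assumed std-dev

# Noisy samples of the plane z = 0, with 1 cm "raw_noise" (simulation value).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(8, 3))
pts[:, 2] = rng.normal(0.0, 0.01, size=8)
res = point_plane_residual(np.array([0.2, 0.1, 0.02]), pts)
print(res)
```

With map_noise five times plane_max_p2pd, even a residual that barely passes the hard check contributes at most about 0.2 sigma after whitening, which is the conservative inflation described above.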

So, my questions are the following:

  1. Why is the map_noise parameter much larger in simulation than in real experiments, given that they use the same sanity check threshold plane_max_p2pd, and the simulation has a much smaller LiDAR point measurement noise raw_noise?
  2. It seems that the three simulation scenes mentioned in the paper serve three different purposes, with UD_Small used for consistency verification. I'm wondering how consistent the system is on longer or more complex datasets (e.g., a "C"-shaped corridor), because UD_Small lasts only 60 seconds, and the whole scene remains visible to the LiDAR throughout the experiment (i.e., the local map never decays).
  3. How consistent is the system under different LiDAR noise levels, for example raw_noise at 2 cm, 3 cm, or the 10 cm used in the real-world experiments? How should the other parameters be changed when raw_noise changes?

The table below is excerpted from the configuration files of MINS:

| Parameter | KAIST / KAIST LC | KAIST L | Simulation |
|---|---|---|---|
| raw_downsample_size | 2.0 | 1.0 | 0.3 |
| raw_noise | 0.1 | 0.1 | 0.01 |
| map_downsample_size | 0.5 | 0.3 | 0.3 |
| map_noise | 0.1 | 0.1 | 0.5 |
| plane_max_p2pd | 0.1 | 0.1 | 0.1 |
| map_decay_time | 30 | 9999 | 120 |
| map_decay_dist | 30 | 100 | 100 |

I hope you can clear up my questions. Thank you!

WoosikLee2510 commented 1 year ago

Hi @lewisjiang. Thank you for your interest and questions. To answer them, I first need to explain what map_noise is. map_noise is the noise of a map point {}^Mp_i expressed in the local map frame, because MINS only maintains a local map. Therefore, unlike a global map (map point {}^Gp_i), the magnitude of the error can be bounded, because it does not inherit the global pose estimation error. Still, there are many error sources in map registration (raw_noise, pose error, calibration error, ICP error, etc.), so the actual error can vary with the scenario, and map_noise roughly represents their combined magnitude. In many simulations, I found that the map can actually have a larger error even though it passes the local plane threshold, so I set map_noise to a large value.

Now to answer your questions:

  1. In simulation, I was testing many things, and I think the LiDAR parameters were what I tuned last. As you mentioned, I set map_noise to a conservative value so that the LiDAR does not hurt overall localization with the other sensors.
  2. The 'UD_Small' LiDAR test was meant to show how the system works in an ideal case. It will likely stay consistent over a long run in the same dataset or scenario. However, for a more complex dataset, such as one where the system revisits the same place after a long run, this consistency may not hold. There are a couple of reasons for this: 1) the map is likely to get blurry due to accumulated estimation error, and 2) MINS does not support LiDAR loop closure. If the system does not revisit the same place (or the local map no longer holds points there), then overall consistency may remain fine.
  3. The noise level should be set to the actual sensor noise plus a small margin. On the KAIST dataset, I used 0.1 because the noise level is also used to balance the other sensors. If you do not have this balancing problem and your vehicle motion is slow, you can use small values such as 2~5 cm. However, if the actual sensor noise is large, e.g., 10 cm, then the LiDAR point-to-plane model will likely not hold.
lewisjiang commented 1 year ago

Hi @WoosikLee2510. Thank you for your answers; they really helped clear up my confusion. I'm afraid I have to bother you with a few more follow-up questions.

  1. I understand that moderately inflating some noise parameters can help balance information from different sensors. What I don't understand is why map_noise is much larger in simulation than in the real world. As you mentioned, map_noise summarizes many error sources, but in simulation the conditions are more controlled and ideal, especially because the environment consists of perfect planes and the LiDAR point noise strictly conforms to raw_noise. Also, looking at the published LiDAR map in rviz, the points' deviation is far smaller than map_noise, even after all the noise sources have taken effect. I also changed map_noise to smaller values, and the simulation reports similar or smaller average RMSE.
  2. I'm more interested in consistency when the surroundings evolve over time, namely when the robot keeps exploring new places and forgetting old places outside its current view, in the odometry sense you mentioned. I'm wondering whether you have done such experiments and what the results were. I think this scenario is closer to real-world operating conditions.
  3. I think this question amounts to an ablation study on simulation noise. I want to know what the NEES would look like if we simply gave the simulator a larger raw_noise, and how to tune the other parameters accordingly if we want the system to remain consistent. BALM2 has done a similar analysis (Fig. 7), though in a batch-optimization sense.
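For reference, the average NEES metric discussed here can be sketched numerically (a toy example with made-up covariances, not MINS code): a consistent filter averages NEES near the state dimension, while an inflated reported covariance (like a large map_noise) pushes it well below.

```python
import numpy as np

def avg_nees(errors, covs):
    """Average normalized estimation error squared: mean of e^T P^{-1} e."""
    return float(np.mean([e @ np.linalg.solve(P, e) for e, P in zip(errors, covs)]))

rng = np.random.default_rng(1)
d, N = 3, 5000
P_true = np.diag([0.1 ** 2, 0.1 ** 2, 0.05 ** 2])   # true error covariance
e = rng.multivariate_normal(np.zeros(d), P_true, size=N)

# Consistent filter: reported covariance equals the true one -> avg NEES ~ d.
nees_ok = avg_nees(e, [P_true] * N)
# Conservative filter: covariance inflated 25x -> avg NEES ~ d / 25.
nees_cons = avg_nees(e, [25.0 * P_true] * N)
print(nees_ok, nees_cons)
```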
WoosikLee2510 commented 1 year ago
  1. The map_noise value in simulation is set conservatively to ensure overall consistency and balance with the other sensors. While the point cloud looks good in visualization (rviz), with walls being walls and lines being lines, the map-based LiDAR update often makes the system inconsistent due to the aforementioned reasons and more. Actually, most map-based systems (camera or LiDAR) do not guarantee "theoretical" consistency: most of them show accuracy improvement, but consistency is not evaluated. As someone focusing on filters, their covariances, and their consistency, I found experimentally that the map uncertainty should be set far larger than our guess, which, again, can be scenario-dependent (it also varies across simulation scenarios). I guess the other point you are asking about is why map_noise is set smaller in the real world (KAIST). I remember this was also part of the weight-balancing issue with the other sensors: too large a map_noise made the LiDAR update much less effective (KAIST also has far fewer valid measurement points due to the sensor installation, roughly 1/4~1/10 of simulation). So I decreased the value to give it more weight, especially compared to the cameras.

  2. Yes, when I tested (in simulation) almost pure odometry that only keeps a nearby sub-map point cloud, the system was able to maintain consistency. But I would like to note that the test environment was closed (indoor-like) and always provided sufficient matches between the map and a new scan. In the real world, such as KAIST, which we reported in the paper, LiDAR worked as a source of odometry information (as no loop-closure information is provided), and the system seemed stable. The reason I use the term 'stable' is that we do not have true ground truth in the real world, so we cannot evaluate true consistency.

  3. You ask a good question, and here is my thinking process: I believe one of the most significant error sources is raw_noise, because MINS relies on the point-on-plane LiDAR measurement model. To use it, we basically have the following steps: 1) pick a point from a new scan and find its neighboring points in the map (by distance, via a kd-tree); 2) fit a plane to those map points and compute the point-on-plane distance. There are many assumptions in this process, such as that neighboring map points can be found by distance, that they actually lie on the same plane, and that the fitted plane represents the actual plane the points are on. If raw_noise is large, then not only does the measurement have a higher covariance, but all of these assumptions can break. For example, if raw_noise is 1 m (so the map will have at least 1 m of noise), then none of these assumptions will hold in general. I am not sure how BALM2 addresses this issue (I only scanned the paper), but such a measurement-model breakdown is not something that can easily be fixed by inflating other uncertainty parameters.
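The two registration steps above, and how a large raw_noise breaks the plane sanity check, can be sketched with a toy single-plane model (hypothetical code; brute-force search stands in for the kd-tree):

```python
import numpy as np

def find_neighbors(query, map_points, k=5):
    """Brute-force k-nearest-neighbor search (a kd-tree gives the same result faster)."""
    d2 = np.sum((map_points - query) ** 2, axis=1)
    return map_points[np.argsort(d2)[:k]]

def plane_check_pass_rate(map_points, queries, plane_max_p2pd=0.1):
    """Fraction of scan points whose map neighbors still pass the plane sanity check."""
    passed = 0
    for q in queries:
        nb = find_neighbors(q, map_points)
        c = nb.mean(axis=0)
        n = np.linalg.svd(nb - c)[2][-1]           # plane normal: least-spread direction
        if np.max(np.abs((nb - c) @ n)) <= plane_max_p2pd:
            passed += 1
    return passed / len(queries)

rng = np.random.default_rng(2)
xy = rng.uniform(-5.0, 5.0, size=(400, 2))
rates = {}
for sigma in (0.01, 1.0):                          # 1 cm vs. 1 m point noise
    m = np.column_stack([xy, rng.normal(0.0, sigma, 400)])  # noisy samples of z = 0
    rates[sigma] = plane_check_pass_rate(m, m[:50])
print(rates)
```

In this toy setup, at 1 cm noise essentially every candidate passes the plane_max_p2pd = 0.1 check, while at 1 m noise almost none do, so the point-on-plane model is left with no valid residuals to inflate.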

lewisjiang commented 1 year ago

@WoosikLee2510 I have to say, your detailed explanation (and the paper itself) of both the theoretical modeling and the practical engineering concerns is unprecedented in the literature. Really nice work; thank you for answering my questions.