PRBonn / kinematic-icp

A LiDAR odometry pipeline for wheeled mobile robots
https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/kissteam2025icra.pdf
MIT License

Kinematic ICP Threshold #15

Closed tizianoGuadagnino closed 1 week ago

tizianoGuadagnino commented 3 weeks ago

Motivation

The adaptive threshold of KISS-ICP does not translate well to Kinematic ICP, since in the latter case we typically have an excellent initial guess from the robot odometry. This implies that, in principle, we can use a more conservative (lower) threshold, making the system more robust in the data association phase.

Concept of what I have done

What I tried to do in this PR is to design a new correspondence threshold scheme, `CorrespondenceThreshold`, that takes into account two things:

  1. Our (no longer so) beloved `VoxelHashMap` has a specific point resolution, given by `voxel_size` and `max_points_per_voxel`. We should factor this resolution into the threshold, as it should be impossible for the system to go below this value. Why is that? If we look at the `VoxelHashMap`, we only add a point to the map if its distance to every other point already in the map is at least equal to the map resolution $d$. That means our correspondence search does not match simply the closest point, but the closest measured point within a ball of radius $d$. I can add more details if needed. This fact is captured by the class's `map_discretization_error_` parameter.
  2. We still need to consider how much the robot odometry deviates from the LiDAR odometry estimate; this is done in a very similar fashion to KISS-ICP. However, we remove `min_motion_threshold` and `initial_threshold`, as we can now expect the motion prediction to be much more accurate from the beginning. In addition, the threshold has a minimum value given by `map_discretization_error_` (see the sketch after the formula below).

If $\sigma_{\text{map}}$ is map_discretization_error_ and $\sigma_{\text{odom}}$ is the average deviation between robot and LiDAR odometry in point space (as in KISS), our new threshold $\tau$ will be:

$$\tau = 3 (\sigma_{\text{map}} + \sigma_{\text{odom}})$$
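To make the scheme concrete, here is a minimal sketch of how such a threshold could be maintained; the struct, member, and function names are illustrative and not necessarily the actual `CorrespondenceThreshold` interface in this PR. It assumes $\sigma_{\text{odom}}$ is tracked as a running RMS of the deviation between the odometry prediction and the ICP estimate, measured in point space as in KISS-ICP.

```cpp
// Sketch of the adaptive correspondence threshold described above (illustrative only).
#include <cmath>

struct CorrespondenceThresholdSketch {
    explicit CorrespondenceThresholdSketch(double map_discretization_error)
        : map_discretization_error_(map_discretization_error) {}

    // Accumulate the deviation (in point space) between the odometry prediction
    // and the pose estimated by the ICP for the current scan.
    void UpdateOdometryError(double deviation) {
        odom_sse_ += deviation * deviation;
        num_samples_++;
    }

    // tau = 3 * (sigma_map + sigma_odom); never smaller than 3 * sigma_map.
    double ComputeThreshold() const {
        const double sigma_odom =
            num_samples_ > 0 ? std::sqrt(odom_sse_ / num_samples_) : 0.0;
        return 3.0 * (map_discretization_error_ + sigma_odom);
    }

    double map_discretization_error_;  // sigma_map, from voxel_size and max_points_per_voxel
    double odom_sse_ = 0.0;            // sum of squared odometry deviations
    int num_samples_ = 0;              // number of accumulated scans
};
```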

Add configuration parameters for this module

Finally, I decided to give users the option to fix the threshold if they are willing to tune this parameter for their scenario. This is done with the new parameters `use_adaptive_threshold`, which enables or disables the adaptation of the threshold, and `fixed_threshold`, which sets the threshold value when `use_adaptive_threshold=false`.
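In code, selecting between the two modes boils down to something like the following; the struct and function names are hypothetical, only the two parameter names come from this PR.

```cpp
// Hypothetical sketch of how the new parameters could select the threshold.
struct ThresholdConfig {
    bool use_adaptive_threshold = true;  // enable/disable the adaptation
    double fixed_threshold = 1.0;        // used only when use_adaptive_threshold == false
};

double SelectThreshold(const ThresholdConfig &config, double adaptive_threshold) {
    return config.use_adaptive_threshold ? adaptive_threshold : config.fixed_threshold;
}
```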

tizianoGuadagnino commented 2 weeks ago

Thanks for pushing this. After discussing it for a while, we agreed we should definitely address it. Using the same threshold everywhere always felt wrong.

I just found a few minor things.

Before merging, we should thoroughly test the impact on performance. Do you already know how it looks for our test cases and also quantitatively for the sequences we recorded?

Besides that, we can also discuss how to actually combine these two kinds of error terms. Right now we sum them, which of course works, but how about, for example, taking the max of the two:

$$\tau = 3 \max(\sigma_{\text{map}}, \sigma_{\text{odom}})$$

Just as an idea; this guarantees that we compensate at least for the map discretization error, and increase the threshold if our odometry is far off.

Also, I'm not sure if the multiplication by 3 is still needed. Have you tried removing it?

Lastly, one comment on the map discretization error. If you, for example, assume three map points on a plane, spaced one map resolution $d$ apart, the maximum discretization error for a point in the center will be

$$\sigma_{\text{map}} = \frac{d}{\sqrt{3}}$$

This is based on the idea that the three map points form an equilateral triangle. But maybe that is overthinking it, and we can just go with the map resolution $d$ to include some additional safety margin.
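For reference, assuming the equilateral-triangle reading above, the worst-case query point is the centroid of the triangle, and its distance to each of the three vertices is the circumradius:

$$\sigma_{\text{map}} = \frac{d}{2\sin 60^{\circ}} = \frac{d}{\sqrt{3}} \approx 0.577\,d < d,$$

so using the full resolution $d$ instead does indeed add a safety margin.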

I tried different things before coming up with this solution. I did not directly try the max, but something very close: if the error is not above `map_discretization_error_`, I do not update the threshold at all, meaning that the threshold can never decrease below `map_discretization_error_`. This resulted in a threshold that was too small, and the estimates broke at the first turn. Your version would be 3x that, so maybe it works; we can test it out, but I can tell you that removing the 3x or reducing the values too much degrades performance, based on what I have seen so far. As you said, we should use our data to find out which approach is better. I guess many things will work, but we can at least compare trajectories.

tizianoGuadagnino commented 1 week ago

After a long discussion, we decided to move forward and just merge this, as any evaluation seems inconclusive due to the lack of data; we still want to investigate this through #20.

As a concrete result, using this proposed version or what @benemer proposes (which is $\tau = 6\cdot\sigma_{\text{map}}$) gives identical results, which of course tells us that, based on the little data we have available, there are probably 20000 formulas we could come up with that would more or less work. For now, we will use this and allow the user to manually tune the value if needed. Hopefully this covers 99% of the use cases, or, as we wish for the project, we will eventually have enough data available to have a final word on this (unlikely, but worth hoping for).