Ekumen-OS / beluga

A general implementation of Monte Carlo Localization (MCL) algorithms written in C++17, and a ROS package that can be used in ROS 1 and ROS 2.
https://ekumen-os.github.io/beluga/
Apache License 2.0

Investigate why the scan likelihood formula does not follow the model in Prob. Rob. #153

Open · glpuga opened this issue 1 year ago

glpuga commented 1 year ago

Description

Both QuickMCL and AMCL appear to have long used a formula for aggregating the per-beam likelihoods p that does not match the model proposed in Probabilistic Robotics (Table 6.3).

The explanation seems to be based only on the empirical evidence that "it works". See

For the sake of equivalence, we currently implement this formula in Beluga too.

However, we should investigate how performance changes if the Prob.Rob. formula is used instead, and what the relative merits of each option are.
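For reference, a minimal sketch of the two aggregation schemes under discussion: the product over beams from Probabilistic Robotics (Table 6.3), and the sum-of-cubes accumulation commonly attributed to AMCL/QuickMCL. The `beam_likelihood` helper and its mixture weights are hypothetical, for illustration only, not Beluga's API.

```cpp
#include <cmath>
#include <vector>

// Hypothetical per-beam likelihood: z_hit * N(dist; 0, sigma_hit) + z_rand / z_max,
// where dist is the distance from the beam endpoint to the nearest obstacle
// in the likelihood field. Mixture weights here are illustrative.
double beam_likelihood(double dist, double sigma_hit, double z_hit = 0.9,
                       double z_rand = 0.1, double z_max = 30.0) {
  const double gauss = std::exp(-(dist * dist) / (2.0 * sigma_hit * sigma_hit));
  return z_hit * gauss + z_rand / z_max;
}

// Aggregation as in Probabilistic Robotics, Table 6.3: product over beams.
double aggregate_product(const std::vector<double>& beam_probs) {
  double q = 1.0;
  for (double pz : beam_probs) {
    q *= pz;
  }
  return q;
}

// Aggregation as commonly found in AMCL / QuickMCL: sum of cubed beam
// probabilities (empirically motivated, no probabilistic justification).
double aggregate_sum_of_cubes(const std::vector<double>& beam_probs) {
  double p = 1.0;  // AMCL seeds the accumulator at 1.0.
  for (double pz : beam_probs) {
    p += pz * pz * pz;
  }
  return p;
}
```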

Definition of done

ivanpauno commented 1 year ago

Multiplying probabilities assumes that each beam's measurement is independent. But suppose you have a "dynamic obstacle" (i.e. one not part of the likelihood field): many beams would then get a low value, and the resulting product would be very low.

Instead, when adding the "weights" of each beam, each beam can only improve how good the "match" is. If you take an Nth power, a really good match weighs more.

We can test it, but I suspect that multiplying the weights of all beams will not perform well in the presence of dynamic obstacles. I agree, though, that the current formula used by AMCL and QuickMCL doesn't have any backing (neither theoretical nor empirical), so doing this analysis seems worthwhile.
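A toy numerical illustration of that point (all values made up): with a product, a handful of beams hitting an unmapped obstacle can drop the particle weight by many orders of magnitude, while an additive aggregation barely moves.

```cpp
#include <iostream>
#include <vector>

int main() {
  // 100 beams that match the map well (p = 0.9 each), except for 10 beams
  // that hit an unmapped dynamic obstacle (p = 0.01 each). Illustrative only.
  std::vector<double> beams(100, 0.9);
  for (int i = 0; i < 10; ++i) beams[i] = 0.01;

  double product = 1.0;
  double sum_cubes = 0.0;
  for (double pz : beams) {
    product *= pz;
    sum_cubes += pz * pz * pz;
  }

  // Product: ~7.6e-25 -- the 10 outlier beams dominate the weight.
  // Sum of cubes: ~65.6 -- the outliers barely matter.
  std::cout << "product: " << product << "\n"
            << "sum of cubes: " << sum_cubes << "\n";
}
```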


About the experiment, I think it's really important for it to include obstacles that are not in the likelihood map, so we can compare how each formula performs in a more realistic setting.

hidmic commented 1 year ago

> that would cause many beams to have a low value, and result in a pretty low multiplicative value.

Hmm, I suspect there is a computational element to this. Working with smaller quantities across the board isn't a problem unless you hit quantization limits. I wonder if a log-odds representation may be a principled yet computationally tractable solution. Thrun proposes it for Bayesian filtering in general for that very reason.
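To make the numerical angle concrete, here is a minimal sketch (assuming the same hypothetical per-beam likelihoods as above) of accumulating the product in log space, so the weight stays representable even when the raw product would underflow the floating-point range.

```cpp
#include <cmath>
#include <vector>

// Product of many per-beam likelihoods, accumulated in log space so the
// result stays representable even when the raw product would underflow
// (e.g. hundreds of beams with probabilities well below 1).
double log_aggregate_product(const std::vector<double>& beam_probs) {
  double log_q = 0.0;
  for (double pz : beam_probs) {
    log_q += std::log(pz);  // assumes pz > 0, guaranteed by the z_rand floor
  }
  return log_q;  // the weight is exp(log_q); keep it in log form as long as possible
}
```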

hidmic commented 1 year ago

Funnily enough, the likelihood_field_prob model in nav2_amcl voices the same concerns about dynamic obstacles and ~~uses log-odds representations~~ (not quite log-odds, only a log transform; I misread the code).

hidmic commented 7 months ago

I will eventually get back to this and try out Log-PF ideas. My money is on a numerical issue with likelihood functions and floating-point representations.
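One building block of any log-domain particle filter is normalizing log weights without leaving log space; a standalone sketch using the standard log-sum-exp trick (not Beluga's API, names are illustrative), for whenever this gets picked up:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Normalize particle weights given in log space, using the log-sum-exp
// trick to avoid overflow/underflow when exponentiating.
std::vector<double> normalize_log_weights(std::vector<double> log_weights) {
  const double max_lw =
      *std::max_element(log_weights.begin(), log_weights.end());
  double sum = 0.0;
  for (double lw : log_weights) {
    sum += std::exp(lw - max_lw);
  }
  const double log_norm = max_lw + std::log(sum);
  for (double& lw : log_weights) {
    lw = std::exp(lw - log_norm);  // back to linear, now summing to 1
  }
  return log_weights;  // returned vector holds the normalized linear weights
}
```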