Open glpuga opened 1 year ago
It seems that multiplying probabilities assumes that the measurement of each beam is independent. But suppose you have a "dynamic obstacle" (i.e. one not part of the likelihood field): that would cause many beams to have a low likelihood, and result in a very small product.
Instead, when adding the "weights" of each beam, you're only improving how good the "match" is. If you take an Nth power, a really good match weighs more.
We can test it, but I suspect that multiplying the weights of all beams will not give good results in the presence of dynamic obstacles. I agree, though, that the formula currently used by AMCL and QuickMCL isn't backed by any evidence (neither theoretical nor empirical), so doing this analysis seems worth it.
About the experiment, I think it's really important for it to include obstacles that are not in the likelihood map, so we can compare how each formula performs in a more realistic scenario.
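To make the contrast concrete, here is a minimal, hypothetical sketch (not Beluga code; the additive rule below assumes the sum-of-cubes aggregation I believe AMCL uses) comparing both rules on a toy scan where a few beams hit an unmapped obstacle:

```cpp
#include <cstdio>
#include <vector>

int main() {
  // Toy per-beam likelihoods: most beams match the map well (~0.9),
  // but a few hit an unmapped (dynamic) obstacle and score low (~0.01).
  std::vector<double> beam_likelihoods(30, 0.9);
  beam_likelihoods[3] = beam_likelihoods[4] = beam_likelihoods[5] = 0.01;

  double product = 1.0;    // Prob.Rob. style: multiply per-beam likelihoods.
  double sum_cubed = 0.0;  // Assumed AMCL style: add the cube of each one.
  for (double p : beam_likelihoods) {
    product *= p;
    sum_cubed += p * p * p;
  }

  // The product collapses by ~6 orders of magnitude because of just 3 bad
  // beams, while the additive score barely changes.
  std::printf("product:   %g\n", product);
  std::printf("sum_cubed: %g\n", sum_cubed);
}
```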
> that would cause many beams to have a low likelihood, and result in a very small product.
Hmm, I suspect there is a computational element to this. Working with smaller quantities across the board isn't a problem unless you hit quantization limits. I wonder if a log-odds representation may be a principled yet computationally tractable solution. Thrun proposes it for Bayesian filtering in general for that reason.
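To illustrate what I mean by working in log space, a minimal sketch (hypothetical helper, not an existing Beluga API): keep per-particle log-weights and normalize with the usual log-sum-exp trick so the intermediate sums never underflow:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Normalize weights stored in log space. Subtracting the max before
// exponentiating keeps every exp() argument <= 0, avoiding underflow
// of the individual terms and overflow of the sum.
void normalize_log_weights(std::vector<double>& log_weights) {
  const double max_lw =
      *std::max_element(log_weights.begin(), log_weights.end());
  double sum = 0.0;
  for (double lw : log_weights) {
    sum += std::exp(lw - max_lw);
  }
  const double log_norm = max_lw + std::log(sum);
  for (double& lw : log_weights) {
    lw -= log_norm;  // now exp(lw) sums to 1 across particles
  }
}
```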
Funnily enough, the `likelihood_field_prob` model in nav2_amcl voices the same concerns about dynamic obstacles and ~~uses log-odds representations~~ not quite log-odds, only a log transform, misread the code.
I will eventually get back to this and try out Log-PF ideas. My money is on a numerical issue with likelihood functions and floating point representations.
FYI I finally got around to giving this a shot. Specifically, using log-likelihoods as weights and normalizing in log space to avoid the numerical issues. So far I can tell that the Jacobi algorithm they propose works as intended, numerically speaking. I haven't tried it on Beluga nor run any microbenchmarks on it (the algorithm is O(N) in the number of particles) yet, but it may be a nice contender.
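For reference, the core of the Log-PF approach as I read it is the Jacobian logarithm, which accumulates a sum of exponentials one term at a time without ever leaving log space. A minimal sketch under that assumption (not Beluga code):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Jacobian logarithm: computes log(exp(a) + exp(b)) without leaving
// log space; log1p keeps precision when the correction term is tiny.
double jacobi_log(double a, double b) {
  return std::max(a, b) + std::log1p(std::exp(-std::abs(a - b)));
}

// Log of the sum of particle weights given as log-weights, accumulated
// in a single O(N) pass. Assumes at least one finite log-weight.
double log_sum(const std::vector<double>& log_weights) {
  double acc = -std::numeric_limits<double>::infinity();
  for (double lw : log_weights) {
    acc = jacobi_log(acc, lw);
  }
  return acc;
}
```

Normalization is then `lw -= log_sum(log_weights)` for each particle, still entirely in log space.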
## Description
Both QuickMCL and AMCL seem to have, for a long time, used a formula for aggregation of the likelihood `p` that does not match the model proposed in Probabilistic Robotics (Table 6.3). The explanation seems to be based only on the empirical evidence that "it works". See
For the sake of equivalence, we currently implement this formula in Beluga too.
However, we should investigate how performance changes if the Prob.Rob. formula is used instead, and what the relative merits of each option are.
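To spell out the two aggregation rules under comparison (the AMCL-side expression is my reading of the nav2_amcl source; treat the exact exponent and offset as assumptions):

$$
w_{\text{Prob.Rob.}} = \prod_{k=1}^{K} p_k
\qquad \text{vs.} \qquad
w_{\text{AMCL}} = 1 + \sum_{k=1}^{K} p_k^{3}
$$

where $p_k$ is the per-beam likelihood from the likelihood field (hit term plus random-measurement term) in both cases.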
## Definition of done