nahueespinosa opened 1 year ago
Hi there @nahueespinosa @hidmic! Should we create a new sensor model type called LikelihoodFieldProb, as is done in Nav2 AMCL? At first glance, it seems like the "appropriate" thing to do, although LikelihoodField and LikelihoodFieldProb have a lot of code in common. :thinking:
I am more inclined to reuse the same LikelihoodField model and add new parameters, but I have to spend some time looking at the code to see if that's possible. Or maybe an interface? WDYT? Suggestions are always welcome :sweat_smile:
@DPR00 From the looks of it, these are the inputs of the algorithm:
One of these two problems needs to be solved here:
In my view, the best solution would be to decouple the likelihood field from the likelihood field sensor model, and create a new free function that implements the beam skipping algorithm.
A bit of pseudo-code:
```cpp
auto likelihood_field = LikelihoodField{map};
auto sensor_model = LikelihoodFieldModel{likelihood_field, ...};
...
for (auto&& [measurement, ...] : data) {
  auto filtered_measurement = skip_unlikely_beams(particles, measurement, ...);
  particles |= beluga::actions::propagate(...) |
               beluga::actions::reweight(sensor_model(std::move(filtered_measurement))) |
               ...;
}
```
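One possible shape for that free function, assuming the caller precomputes the fraction of particles agreeing with each beam (all names, signatures, and thresholds here are illustrative, not the actual Beluga API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: keep only the beam indices that enough particles agree
// with, and fall back to all beams if too many would be skipped.
std::vector<std::size_t> skip_unlikely_beams(
    const std::vector<double>& beam_agreement,  // fraction of particles matching each beam
    double beam_skip_threshold,                 // min agreement fraction to keep a beam
    double beam_skip_error_threshold) {         // abort skipping if too many beams fail
  std::vector<std::size_t> kept;
  for (std::size_t i = 0; i < beam_agreement.size(); ++i) {
    if (beam_agreement[i] >= beam_skip_threshold) kept.push_back(i);
  }
  const double skipped_ratio =
      1.0 - static_cast<double>(kept.size()) / static_cast<double>(beam_agreement.size());
  if (skipped_ratio > beam_skip_error_threshold) {
    // Too many beams disagree with the map: integrate all beams this iteration.
    kept.resize(beam_agreement.size());
    for (std::size_t i = 0; i < kept.size(); ++i) kept[i] = i;
  }
  return kept;
}
```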
As a side note, they don't seem to use the cubic formula to aggregate probabilities in this model, so we should look into that as well.
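For context, the "cubic formula" refers to how AMCL's likelihood field model aggregates per-beam probabilities: rather than multiplying them (which underflows quickly), it accumulates the sum of their cubes. A minimal sketch of that aggregation:

```cpp
#include <cassert>
#include <vector>

// Sum-of-cubes aggregation as used by AMCL's likelihood field model: ad hoc
// but numerically robust, since high-probability beams dominate and
// low-probability beams barely reduce the weight.
double aggregate_sum_of_cubes(const std::vector<double>& per_beam_probabilities) {
  double weight = 1.0;
  for (double pz : per_beam_probabilities) {
    weight += pz * pz * pz;
  }
  return weight;
}
```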
> As a side note, they don't seem to use the cubic formula to aggregate probabilities in this model, so we should look into that as well.
They use log weights. I'd be inclined towards keeping the LikelihoodFieldModel for 2D in the same class, using the parametrization and abstractions discussed above, and deferring the weight discussion for #153. In my mind, we should eventually define a beluga::LogWeight
type, and have weight construction, aggregation, and normalization be bound to the weight type. Perhaps it's overkill and we should work all weights in the same space (linear, log, whatever), but leaving the door open sounds reasonable to me.
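A rough sketch of what such a strong weight type could look like, so that aggregation stays in log space and only normalization crosses back to linear space (hypothetical, not an actual Beluga type):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical beluga::LogWeight sketch: products of linear weights become
// sums of log weights, avoiding underflow when many beams are integrated.
class LogWeight {
 public:
  explicit LogWeight(double log_value) : log_value_(log_value) {}
  static LogWeight from_linear(double w) { return LogWeight{std::log(w)}; }

  // Aggregation bound to the weight type: multiply in linear == add in log.
  LogWeight& operator*=(const LogWeight& other) {
    log_value_ += other.log_value_;
    return *this;
  }

  double log() const { return log_value_; }
  double linear() const { return std::exp(log_value_); }

 private:
  double log_value_;
};
```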
I wonder if, instead of implementing it, we should evaluate whether this actually makes a difference in performance. Is there empirical or bibliographic support for this feature? Do people use this?
The multi-modal statistical model for the beam model accounts for unmapped obstacles to some extent. The likelihood field model lacks this term because it "sees through" them.
At first sight, pre-selecting measurements that confirm the proposal distribution with no other model to support this sounds like selection bias to me.
> I wonder if, instead of implementing it, we should evaluate whether this actually makes a difference in performance. Is there empirical or bibliographic support for this feature? Do people use this?
There's neither, other than it being a long-standing feature in Nav2. We also won't know whether it makes a difference in performance until we have tried it, but unlike most situations, we could use Nav2 AMCL as the "reference" implementation for a quick validation.
> At first sight, pre-selecting measurements that confirm the proposal distribution with no other model to support this sounds like selection bias to me.
It does come off as a hack to mitigate the numerical issues that motivate likelihood aggregation via sums of cubes. LikelihoodFieldProb accumulates log weights but then normalizes in linear space, so it's not entirely out of the woods. Theoretically speaking, I'm with you, it's all nonsense, but so is summing cubes, and it works :man_shrugging:
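For reference, the usual way to normalize accumulated log weights without leaving log space prematurely is the log-sum-exp trick: subtracting the maximum before exponentiating keeps `exp()` from underflowing even for very negative log weights. This is illustrative only, not what LikelihoodFieldProb currently does:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Normalize log weights via log-sum-exp: shift by the maximum log weight so
// every exponentiation stays in a safe numeric range, then divide by the total
// (a subtraction in log space).
std::vector<double> normalize_log_weights(const std::vector<double>& log_weights) {
  const double max_lw = *std::max_element(log_weights.begin(), log_weights.end());
  double sum = 0.0;
  for (double lw : log_weights) sum += std::exp(lw - max_lw);
  const double log_total = max_lw + std::log(sum);
  std::vector<double> normalized;
  normalized.reserve(log_weights.size());
  for (double lw : log_weights) normalized.push_back(std::exp(lw - log_total));
  return normalized;
}
```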
All in all, in general and from a process perspective, I'd like to have a standard procedure for validating improvement hypotheses such as this. Landing https://github.com/Ekumen-OS/lambkin/pull/93 and #415 will get us one step closer to that.
> There's neither, other than it being a long-standing feature in Nav2.
FTR I unearthed https://github.com/ros-planning/navigation/pull/238. Those numbers they promised never saw the light of day, and yet it was merged, so I can only assume author and maintainer knew each other.
> but so is summing cubes and it works
Indeed, but notice that we know the sum of cubes works because it's the default, and therefore we can infer people have been using it for years with no complaints, even if it lacks rigorous support.
That's not so clear for beam skipping; I can't really tell how many people actually use it, and the node defaults to not doing it, which maybe also speaks to how broadly relevant this feature is.
I think doing a nav2-vs-nav2 sanity check would be a nice thing to do before embarking on an implementation that might require API changes to get it to work.
> we know the sum of cubes works because it's the default and therefore we can infer people have been using it for years with no complaints

I'm with you on everything but the *no complaints* bit :eyes:
Anyway, this goes beyond this feature. We don't have a clear vetting procedure for feature requests, so we pursue them all and defer that choice to the user. In a perfect world, I'd first rank them based on theoretical soundness, then enforce data-driven checks. I wouldn't want to stall development until we have that, though.
> I think doing a nav2-vs-nav2 sanity check would be a nice thing to do before embarking on an implementation that might require API changes to get it to work.
:+1: I fully agree with that. FYI @DPR00.
> FTR I unearthed ros-planning/navigation#238.
Interesting. @mikeferguson, by any chance do you remember any context about the beam skipping feature in https://github.com/ros-planning/navigation/pull/238?
That's a long time ago - but looking at the repo it came from, and who was involved, I think this was used in the Kuri robot? I'm guessing there was a side conversation, but I don't have any metrics to provide on whether this does much to improve performance.
@DPR00 the base skeleton benchmark is already up and working (https://github.com/Ekumen-OS/lambkin/pull/116), feel free to take over.
Description
Beam skipping is a heuristic implemented in the likelihood_field_model_prob from nav2 that can potentially help with dynamic obstacles. The idea is to ignore the beams for which the majority of the particles do not match the map; this prevents the correct particles from being down-weighted due to unexpected obstacles such as humans. To do that, the algorithm first goes through all the particles and calculates the percentage of particles for which a particular beam matches the map; if it is below a certain threshold, it ignores that beam when calculating the weight of each particle in a second step. If the fraction of beams to be skipped exceeds a certain error threshold, it disables beam skipping for that iteration and integrates all beams.
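The two passes described above can be sketched as follows. All names and signatures are illustrative, loosely mirroring Nav2 AMCL's beam_skip_threshold and beam_skip_error_threshold parameters rather than any actual Beluga or Nav2 API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical two-pass beam skipping sketch.
std::vector<double> beam_skipping_weights(
    std::size_t num_particles, std::size_t num_beams,
    // matches(p, b): does beam b of the measurement agree with the map at particle p?
    const std::function<bool(std::size_t, std::size_t)>& matches,
    // beam_weight(p, b): per-beam likelihood contribution for particle p.
    const std::function<double(std::size_t, std::size_t)>& beam_weight,
    double beam_skip_threshold,          // min fraction of particles that must match a beam
    double beam_skip_error_threshold) {  // max fraction of skipped beams before disabling
  // Pass 1: compute the fraction of particles matching each beam.
  std::vector<bool> keep(num_beams, true);
  std::size_t skipped = 0;
  for (std::size_t b = 0; b < num_beams; ++b) {
    std::size_t matching = 0;
    for (std::size_t p = 0; p < num_particles; ++p) matching += matches(p, b);
    if (static_cast<double>(matching) / num_particles < beam_skip_threshold) {
      keep[b] = false;
      ++skipped;
    }
  }
  // If too many beams would be skipped, integrate all beams this iteration.
  if (static_cast<double>(skipped) / num_beams > beam_skip_error_threshold) {
    std::fill(keep.begin(), keep.end(), true);
  }
  // Pass 2: weight each particle using only the kept beams.
  std::vector<double> weights(num_particles, 1.0);
  for (std::size_t p = 0; p < num_particles; ++p) {
    for (std::size_t b = 0; b < num_beams; ++b) {
      if (keep[b]) weights[p] *= beam_weight(p, b);
    }
  }
  return weights;
}
```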
Definition of done
Beam skipping feature implemented in Beluga.