If the variance is low, it means the new location doesn't relate strongly to any location in particular, so the virtual place likelihood would be high.
From Section III-C of this paper, the score is based on the μ/σ ratio: if L(St = −1|Lt) is high (i.e., Lt is not similar to any particular location in WM, since σ < μ), then Lt is more likely to be a new location.
This approach works well for the BOW likelihood, where the mean is usually very low.
However, when the mean is high, it doesn't work as well (as in this experiment using a global descriptor instead of BOW).
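To put rough numbers on this, here is a toy sketch (made-up likelihood values, not RTAB-Map code) of how the `Mean/StdDev` ratio reacts to the mean level:

```cpp
// Toy sketch (not RTAB-Map code): how Mean/StdDev reacts to the mean level.
#include <cmath>
#include <cstdio>
#include <vector>

// Compute mean and standard deviation of a likelihood vector.
static void meanStd(const std::vector<float> & likelihood, float & mean, float & stdDev)
{
    mean = 0.0f;
    for(float v : likelihood) mean += v;
    mean /= likelihood.size();

    float var = 0.0f;
    for(float v : likelihood) var += (v - mean) * (v - mean);
    stdDev = std::sqrt(var / likelihood.size());
}

int main()
{
    // BOW-like likelihood: most scores near zero, one clear peak.
    std::vector<float> bowLike    = {0.01f, 0.02f, 0.90f, 0.01f, 0.03f, 0.02f};
    // Global-descriptor-like likelihood: all scores high, peak barely stands out.
    std::vector<float> globalLike = {0.80f, 0.82f, 0.95f, 0.81f, 0.83f, 0.80f};

    for(const auto & likelihood : {bowLike, globalLike})
    {
        float mean, stdDev;
        meanStd(likelihood, mean, stdDev);
        // BOW-like case: Mean/StdDev ~0.5 (the peak dominates a low mean).
        // High-mean case: Mean/StdDev ~15.9, so the virtual place score
        // (Mean/StdDev + 1) explodes even though one candidate clearly
        // stands out relative to the others.
        printf("mean=%.3f stdDev=%.3f Mean/StdDev=%.2f\n", mean, stdDev, mean / stdDev);
    }
    return 0;
}
```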
So we added another approach in https://github.com/introlab/rtabmap/pull/1272 (with parameter `Rtabmap/VirtualPlaceLikelihoodRatio`), which is `StdDev / (Max-Mean)` instead of `Mean/StdDev`. Related discussion: https://github.com/introlab/rtabmap/issues/1105.
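With the same toy high-mean numbers as above, the two ratios compare like this (again only an illustration; the actual computation is in `adjustLikelihood()`):

```cpp
// Toy comparison (not RTAB-Map code) of the two virtual place ratios
// on a high-mean, global-descriptor-like likelihood vector.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<float> likelihood = {0.80f, 0.82f, 0.95f, 0.81f, 0.83f, 0.80f};

    float mean = 0.0f;
    for(float v : likelihood) mean += v;
    mean /= likelihood.size();

    float var = 0.0f;
    for(float v : likelihood) var += (v - mean) * (v - mean);
    float stdDev = std::sqrt(var / likelihood.size());

    float max = *std::max_element(likelihood.begin(), likelihood.end());

    // Mean/StdDev blows up because all scores are high and close together,
    // even though index 2 clearly stands out relative to the others.
    printf("Mean/StdDev       = %.2f\n", mean / stdDev);         // ~15.9
    // StdDev/(Max-Mean) compares the spread to how far the best candidate
    // stands out above the mean, so it is not inflated by a uniformly high mean.
    printf("StdDev/(Max-Mean) = %.2f\n", stdDev / (max - mean)); // ~0.46
    return 0;
}
```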
If you have another approach to compute that term, we can add another "if" here: https://github.com/introlab/rtabmap/blob/17c2142c98b80648a272198a144b12b790abbfd6/corelib/src/Rtabmap.cpp#L5291-L5325
Thank you for your reply @matlabbe. In a simple use case, as observed in the sample, it generally works as expected. For correct loop closure, the likelihood mean/std is less than 1, while for a new place, the mean/std is greater than 1. However, in the data I'm analyzing in localization mode, the likelihood mean/std for the correct candidate is approximately 1.5.
I'm using the default incremental BoW dictionary. I'll try different normalization methods as you suggested.
It is hard to say without seeing the full sequence, but based on those images, I would not expect a very high likelihood (in particular with binary features). You may try features like SIFT instead (note that the loop closure benchmark of rtabmap was done using SURF).
In your example, is the best hypothesis the right hypothesis? If so, you could decrease Rtabmap/LoopThr in localization mode.
I've been looking into the global loop closure mechanism in RTAB-Map, and I noticed that its activation seems to depend on the probability of recognizing a new virtual place. If the probability is low enough (below a certain threshold), it suggests that the robot is in a previously visited location, which triggers global loop closure.

In the `adjustLikelihood()` function, the score for recognizing a new virtual place is computed based on the formula `mean/stdDev + 1.0f`. Ideally, when there's a strong candidate for a match, the variance should be low, as the likelihood distribution forms a sharp peak around the candidate, leading to a high `mean/stdDev` value.

However, this results in a higher score for the new virtual place, which contradicts the logic that a strong candidate should result in a lower recognition score for the new virtual place. What are your thoughts on this?
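For reference, this is the computation I am referring to, reduced to a minimal sketch (simplified; the actual `adjustLikelihood()` in Rtabmap.cpp does more than this):

```cpp
// Minimal restatement (not the actual RTAB-Map code) of the virtual place
// score discussed above: mean/stdDev + 1.0f over the raw likelihood values.
#include <cmath>
#include <vector>

float virtualPlaceScore(const std::vector<float> & likelihood)
{
    float mean = 0.0f;
    for(float v : likelihood) mean += v;
    mean /= likelihood.size();

    float var = 0.0f;
    for(float v : likelihood) var += (v - mean) * (v - mean);
    float stdDev = std::sqrt(var / likelihood.size());

    // Guard for a constant likelihood vector (a choice for this sketch only).
    return stdDev > 0.0f ? mean / stdDev + 1.0f : 1.0f;
}
```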