jlblancoc opened this issue 6 years ago
I think, to start with, the particle filter algorithms could be a bit more generalized, and they should be separated into their own classes.
Here's a start of some of my ideas: https://github.com/jolting/mrpt/tree/refactorPF
Great! I'll merge it into an experimental branch so we can continue testing there, ok?
Here it is: https://github.com/MRPT/mrpt/tree/dev-PF-refactor
Ok. I broke the Python bindings. I can fix them later if you like these changes.
No problem about the bindings. I'll take a closer look at the changes (the entire set of PF classes is really intertwined and a bit intricate, honestly...) and come back. It won't be today or tomorrow! ;-)
Hmm... after a first "in-depth" (not deep enough, though) look at the changes, I have some comments:
executeOn() ... is this on purpose? I see and like the idea of using a template parameter for selecting the PF algorithm. However, since the algorithms are "general", I think it would be better to have all their "trait classes" together in one header (probably CParticleFilter.h itself?). Enums as template parameters were another idea, but classes are actually more flexible, since they can be extended in the future, or by user code, etc., right?
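To make the idea concrete, here is a minimal sketch of the trait-class approach (names are illustrative, not the API of either branch):

```cpp
// Illustrative sketch only: names are assumptions, not the code in either branch.
// Each PF algorithm is a small "trait class"; all of them (plus user-defined
// ones) could live together in one header, e.g. CParticleFilter.h.
struct StandardProposalPF
{
	template <class PFCAPABLE>
	static void predictionAndUpdate(PFCAPABLE& obj) { /* standard proposal step */ }
};
struct OptimalProposalPF
{
	template <class PFCAPABLE>
	static void predictionAndUpdate(PFCAPABLE& obj) { /* optimal proposal step */ }
};

// The algorithm is selected at compile time via the second template parameter:
template <class PFCAPABLE, class PF_ALGORITHM>
void executeOn(PFCAPABLE& obj)
{
	PF_ALGORITHM::predictionAndUpdate(obj);
}
```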
The main problem right now is that most code seems to still be calling this signature, which seems not to be implemented (?):
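For reference, this is roughly the current non-templated entry point (from memory, so the exact argument types may be off):

```cpp
// From memory: roughly the current non-templated signature in
// mrpt::bayes::CParticleFilter (argument types may not be exact):
void executeOn(
	CParticleFilterCapable& obj,
	const mrpt::obs::CActionCollection* action,
	const mrpt::obs::CSensoryFrame* observation,
	TParticleFilterStats* stats = nullptr);
```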
I couldn't build it for testing, so perhaps I'm wrong. Anyhow, allowing such a signature without PF_ALGORITHM is mandatory if algorithms are to be selected dynamically via a configuration file, etc.
I imagine writing the implementation of this version of executeOn<> with a switch inside, calling executeOn<PFCapable, PF_Algorithm>() for each PF_Algorithm... or is there any alternative?
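Something like this rough sketch, reusing the trait classes from the sketch above (the enum values just mirror MRPT's usual algorithm names; the rest is an assumption):

```cpp
// Illustrative sketch, reusing the trait classes from the sketch above.
// The enum values mirror MRPT's usual algorithm names; the rest is an assumption.
enum TParticleFilterAlgorithm
{
	pfStandardProposal,
	pfOptimalProposal
	// pfAuxiliaryPFStandard, pfAuxiliaryPFOptimal, ...
};

// Runtime selection (e.g. from a config file), dispatching to the templated version:
template <class PFCAPABLE>
void executeOn(PFCAPABLE& obj, TParticleFilterAlgorithm alg)
{
	switch (alg)
	{
		case pfStandardProposal:
			executeOn<PFCAPABLE, StandardProposalPF>(obj);
			break;
		case pfOptimalProposal:
			executeOn<PFCAPABLE, OptimalProposalPF>(obj);
			break;
			// ... one case per PF_Algorithm
	}
}
```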
I probably should have given those two executeOn variants separate names. The one-template version needs to be pushed down into the individual classes and called something other than executeOn.
I realized too late that template instantiation is probably overkill for this. I was jumbling things around trying to get things to compile and I went a little crazy.
https://github.com/MRPT/mrpt/blob/7c68726716dcfa0f333693046e5d51c71c367dbc/libs/slam/src/slam/CMonteCarloLocalization2D.cpp#L243 https://github.com/MRPT/mrpt/blob/7c68726716dcfa0f333693046e5d51c71c367dbc/libs/slam/src/slam/CMonteCarloLocalization3D.cpp#L198
I'm also thinking about a factory pattern to fix this.
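A hedged sketch of the factory idea (all names are illustrative, nothing here is existing MRPT code), reusing the enum from the sketch above:

```cpp
// Illustrative factory sketch: map the value read from the config file to a
// concrete PF implementation behind a tiny common interface.
#include <memory>

struct IPFAlgorithm
{
	virtual ~IPFAlgorithm() = default;
	virtual void run(/* obj, action, observation */) = 0;
};
struct StandardProposalImpl : IPFAlgorithm
{
	void run() override { /* standard proposal PF iteration */ }
};
struct OptimalProposalImpl : IPFAlgorithm
{
	void run() override { /* optimal proposal PF iteration */ }
};

std::unique_ptr<IPFAlgorithm> createPF(TParticleFilterAlgorithm alg)
{
	switch (alg)
	{
		case pfStandardProposal: return std::make_unique<StandardProposalImpl>();
		case pfOptimalProposal: return std::make_unique<OptimalProposalImpl>();
	}
	return nullptr;
}
```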
The two-template version seems powerful, though. You can use your own version of the algorithm. If you made processActionObservation templated as well, you could have as many variations of particle filters as you want, and the algorithms would be fairly loosely coupled. Hopefully this can improve reuse. https://github.com/MRPT/mrpt/blob/c7ed37ff72972e48fbb253d7410e8d9740effc13/libs/slam/src/slam/CMetricMapBuilderRBPF.cpp#L108
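As a sketch of that loose coupling (class and member names are assumptions, not the current MRPT interfaces; it reuses the templated executeOn from the sketches above):

```cpp
// Illustrative sketch only: a processActionObservation templated on the
// algorithm keeps the RBPF map builder decoupled from any particular PF variant.
class RBPFMapBuilderSketch
{
   public:
	template <class PF_ALGORITHM>
	void processActionObservation(/* const CActionCollection& action,
	                                 const CSensoryFrame& observation */)
	{
		// Reuses the templated executeOn() from the sketches above:
		executeOn<RBPFMapBuilderSketch, PF_ALGORITHM>(*this);
	}
};
// e.g.: builder.processActionObservation<OptimalProposalPF>();
```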
(Follow-up from mrpt-users mailing list here)
Things to bear in mind:
There are 4 different PF algorithms in MRPT. See https://www.mrpt.org/tutorials/programming/statistics-and-bayes-filtering/particle_filter_algorithms/
Also read this first: https://www.mrpt.org/tutorials/programming/statistics-and-bayes-filtering/particle_filters/
What parts are suitable for multi-threading? I think the evaluation of the observation likelihood is probably the easiest spot. There are other costly areas in the "optimal sampling" algorithm, but at least when a dynamic number of particles is enabled, we must synchronize among threads.
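As a hedged illustration of that "easiest spot", assuming each particle's observation likelihood can be evaluated independently (names are illustrative, not MRPT's):

```cpp
// Illustrative sketch only: parallel evaluation of per-particle observation
// likelihoods, which are independent of each other.
#include <algorithm>
#include <execution>
#include <vector>

struct Particle
{
	double log_w = 0;  // log-weight; the pose/map hypothesis is omitted here
};

void updateWeightsParallel(std::vector<Particle>& particles)
{
	std::for_each(
		std::execution::par, particles.begin(), particles.end(),
		[](Particle& p) {
			// = computeObservationLikelihood(p, observation);  (hypothetical)
			const double log_lik = 0.0;
			p.log_w += log_lik;
		});
	// Normalization, resampling, and adapting the number of particles still
	// need a single synchronization point after this loop.
}
```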
Overall: patches should go to "master" (new features will not get into the mrpt-1.5 branch). Also, the user must be able to enable or disable the feature from an INI file entry.
Depending on whether we are doing SLAM or localization, different parts of the PF algorithms stand to gain from parallelization.
Off the top of my head, I think one possible place to implement parallelization is inside each specific implementation of these 4 virtual methods of CParticleFilterCapable:
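If I'm listing them correctly, these are the four per-algorithm prediction/update hooks (signatures abbreviated from memory; double-check the current header):

```cpp
// Abbreviated from memory; check CParticleFilterCapable.h for the exact signatures.
class CParticleFilterCapable
{
	// ...
	virtual void prediction_and_update_pfStandardProposal(
		/* const CActionCollection* action, const CSensoryFrame* observation,
		   const CParticleFilter::TParticleFilterOptions& PF_options */);
	virtual void prediction_and_update_pfAuxiliaryPFStandard(/* idem */);
	virtual void prediction_and_update_pfOptimalProposal(/* idem */);
	virtual void prediction_and_update_pfAuxiliaryPFOptimal(/* idem */);
};
```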
Let's use this thread to further discuss possible strategies on how to address this feature.