Closed: democheng closed this issue 6 years ago.
@simonlynen and @magehrig are the authors of that component and could answer this question.
I have not written that code (and there is no interface to use it at the moment), but I can give you a hint on what I would use. First, we distinguish between supervised and unsupervised dimensionality-reduction methods. Getting labeled data (such as matching/non-matching pairs in the case of LRT) is usually cumbersome, so I suggest using unsupervised methods if performance is not paramount.
Supervised methods:
* Neural-network projection (see Loquercio et al. below)

Unsupervised methods:
* PCA (see Loquercio et al. and Lynen et al. below)
* Kernel PCA (see Loquercio et al. below)

References:
* Loquercio et al.: thorough analysis of different projection methods, including neural-network projection, PCA, and KPCA. TL;DR: KPCA performs slightly better than PCA; a neural network performs better still, but it is supervised and harder to implement.
* Lynen et al.: mentions that PCA performs comparably to LRT, so there is not really a reason to use LRT at all, since PCA is much easier to use.
To summarize, I would just use PCA and forget LRT.
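To make the PCA suggestion concrete, here is a minimal sketch in Python/NumPy (my own illustration, not maplab code): it learns a linear projection matrix A from raw descriptors so that a descriptor x maps to A(x - mean). The function name and dimensions are made up for the example.

```python
import numpy as np

def pca_projection_matrix(descriptors, target_dim):
    """Learn a linear projection with plain PCA.

    descriptors: (n_samples, d) array of raw feature descriptors.
    Returns (A, mean) where A is a (target_dim, d) matrix such that
    a descriptor x is reduced via A @ (x - mean).
    """
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    # The right singular vectors of the centered data are the
    # eigenvectors of the sample covariance (principal directions).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:target_dim], mean

# Toy usage: random 64-D "descriptors" projected down to 10-D.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))
A, mu = pca_projection_matrix(X, 10)
projected = (X - mu) @ A.T
print(projected.shape)  # (500, 10)
```

At query time only A and the mean need to be stored, which is why a learned matrix like this could replace the projection matrices that maplab ships on disk.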
@schneith thank you very much.

@magehrig thank you very much, I will implement the projection matrix according to your suggestions.
Hi all.
I am surprised that someone has already posted this question. I am confused by the code in ComputeProjectionMatrix()
too. At first I assumed the code was implementing the equations in the paper "keypoint design and evaluation for place recognition in 2D lidar maps", but I failed to figure out the equivalence between the code and the equations.
After a quick skim of the first paper @magehrig mentioned (Loquercio et al.), I still can't see a direct connection between the paper and the code, and it is a little overwhelming for me to read all the citations and references in the paper. I currently don't have access to the second paper mentioned above.
I know it may not be necessary, but I am still wondering about the math underlying the code. Does anyone have some direct materials about it? Many thanks.
@NewThinker-Jiwey This function is not used at the moment (dead code). The projection is currently done with projection matrices that are loaded from disk. I was merely referring to methods that could be used to train/learn a projection matrix that then could potentially replace the ones loaded from disk.
I do not have time at the moment to have a detailed look at that code myself. This code was most likely written by Simon Lynen a long time ago. If you really want to understand the code you might want to try contacting him by email (if you do, it would be helpful to post the answer here).
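For context: however the projection matrix is obtained (loaded from disk, as maplab currently does, or trained as suggested above), applying it at query time is just a matrix-vector product. A minimal sketch, with made-up dimensions and a random stand-in for the matrix that maplab deserializes:

```python
import numpy as np

d_raw, d_proj = 64, 10  # hypothetical raw / projected dimensionalities

# Stand-in for a projection matrix loaded from disk.
A = np.random.default_rng(1).standard_normal((d_proj, d_raw))

def project_descriptor(descriptor, A):
    """Reduce a raw descriptor to the target dimensionality."""
    return A @ descriptor

query = np.ones(d_raw, dtype=np.float32)
print(project_descriptor(query, A).shape)  # (10,)
```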
My question is about this function in algorithm/loopclosure/descriptor-projection/src/build-projection-matrix.cc:

void ComputeProjectionMatrix(const Eigen::MatrixXf& cov_matches, const Eigen::MatrixXf& cov_non_matches, Eigen::MatrixXf* A);

The paper "maplab: an open framework for research in visual-inertial mapping and localization" uses the inverted multi-index [36] in its loop-closure/localization module. Reference [36], "Get out of my lab: large-scale, real-time visual-inertial localization", in turn refers to "Keypoint design and evaluation for place recognition in 2D lidar maps" for the descriptor projection. However, ComputeProjectionMatrix seems different from the equations in those papers. Could you please explain the math, or point me to some papers that would help me understand it?
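Given the signature above, one common construction from a matching and a non-matching covariance is to pick the directions that maximize non-matching variance relative to matching variance, i.e. the generalized eigenvectors of cov_non_matches w = λ cov_matches w. This is a hedged sketch of that idea, NOT the maplab implementation (which would have to be checked against the actual code):

```python
import numpy as np

def covariance_ratio_projection(cov_matches, cov_non_matches, target_dim):
    """Sketch: directions along which non-matching descriptor pairs
    spread out the most relative to matching pairs, obtained from the
    generalized eigenproblem cov_non_matches w = lam * cov_matches w.
    Returns a (target_dim, d) projection matrix A.
    """
    d = cov_matches.shape[0]
    # Regularize so the Cholesky factorization is well defined.
    L = np.linalg.cholesky(cov_matches + 1e-6 * np.eye(d))
    Linv = np.linalg.inv(L)
    # Whiten the non-matching covariance in the "matching" metric,
    # turning the generalized problem into an ordinary symmetric one.
    M = Linv @ cov_non_matches @ Linv.T
    eigvals, eigvecs = np.linalg.eigh(M)      # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :target_dim]    # largest ratios first
    # Map the whitened eigenvectors back: w = L^{-T} u.
    return (Linv.T @ top).T

# Toy usage: matching differences are tight, non-matching are spread out.
rng = np.random.default_rng(0)
diffs_match = 0.1 * rng.standard_normal((1000, 16))
diffs_non = rng.standard_normal((1000, 16))
A = covariance_ratio_projection(np.cov(diffs_match.T), np.cov(diffs_non.T), 4)
print(A.shape)  # (4, 16)
```

Whether ComputeProjectionMatrix actually solves this particular generalized eigenproblem is exactly the open question in this thread; the sketch is only meant to show why the function would take the two covariance matrices as inputs.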