raulmur / ORB_SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

Vocabulary sparsity problem #119

Open joyousrabbit opened 8 years ago

joyousrabbit commented 8 years ago

Hello everyone,

I've been thinking about a problem these days and would appreciate your help: is the vocabulary too big for practical use?

As you can see, the vocabulary has 1 million words, yet each frame has only about 1,000 feature points on average. I can't see any statistical sense in projecting 1,000 features onto 1 million words. Most similar features will fail to match simply because of this huge sparsity (I haven't run an experiment, it's just a feeling).

Besides, comparing two vectors that both have 1 million entries is not that fast.

So, my question is: why not keep the vocabulary small (about 1,000 words)? Then even a brute-force search over 10,000 keyframes takes only 1,000 × 10,000 operations (just 10 times the cost of comparing two vectors of 1 million entries).
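To make the comparison cost concrete: DBoW2 actually stores a bag-of-words vector sparsely, as a map from word id to weight, so only the roughly 1,000 words observed in the image are stored and a score only walks those entries, not all 1 million dimensions. Here is a minimal sketch of such an L1 score, with types that mirror DBoW2's `BowVector` but are written from scratch for illustration:

```cpp
#include <cmath>
#include <cstdint>
#include <map>

// Illustrative types mirroring DBoW2's sparse BowVector: only the
// ~1000 words actually observed in an image are stored, even when
// the vocabulary itself has 1M words.
using WordId = uint32_t;
using WordValue = double;
using BowVector = std::map<WordId, WordValue>;

// L1 similarity between two L1-normalized BoW vectors, in the form
// s = 1 - 0.5 * ||v1 - v2||_1 used by DBoW2's L1 scoring.
// The merge walk visits each stored entry once, so the cost is
// O(n1 + n2) -- about 2,000 steps, independent of vocabulary size.
double l1Score(const BowVector& v1, const BowVector& v2) {
    double common = 0.0;  // sum over shared words of |a-b| - |a| - |b|
    auto it1 = v1.begin(), it2 = v2.begin();
    while (it1 != v1.end() && it2 != v2.end()) {
        if (it1->first == it2->first) {
            common += std::fabs(it1->second - it2->second)
                    - std::fabs(it1->second) - std::fabs(it2->second);
            ++it1; ++it2;
        } else if (it1->first < it2->first) {
            ++it1;  // word only in image 1
        } else {
            ++it2;  // word only in image 2
        }
    }
    // For normalized vectors ||v1||_1 = ||v2||_1 = 1, so
    // ||v1 - v2||_1 = 2 + common.
    return 1.0 - 0.5 * (2.0 + common);
}
```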

Any idea is welcome. Thanks!

rikpires commented 8 years ago

The DBoW2 method is not an exhaustive search; it works somewhat like a k-d tree search, and its efficiency is quite good. In the author's experiments the search takes only about 5 ms on average and less than 40 ms at maximum. The author provides a large pre-trained vocabulary so that everyone can use the same one instead of training their own for each scene. And of course, if you want to speed things up, or your case is special or simple, you can train your own vocabulary.
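To make the "like a k-d tree" point concrete: the vocabulary is a hierarchical k-means tree, and quantizing a descriptor means comparing it against only the k cluster centers at each of L levels, so a vocabulary with k^L words (e.g. k = 10, L = 6 gives 10^6 words) costs only k·L ≈ 60 distance computations per feature rather than 10^6. A sketch of that descent, with a hypothetical `Node` type standing in for DBoW2's internal tree (the real entry point is `TemplatedVocabulary::transform`):

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical node of a hierarchical k-means vocabulary tree
// (a simplified stand-in for DBoW2's internal structure).
struct Node {
    std::vector<uint8_t> center;  // cluster-center descriptor (e.g. 32-byte ORB)
    std::vector<Node> children;   // up to k children; empty at a leaf
    uint32_t wordId = 0;          // valid only at leaves
};

// Hamming distance between two equal-length binary descriptors.
// (__builtin_popcount is a GCC/Clang builtin; C++20 has std::popcount.)
int hamming(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b) {
    int d = 0;
    for (size_t i = 0; i < a.size(); ++i)
        d += __builtin_popcount(static_cast<unsigned>(a[i] ^ b[i]));
    return d;
}

// Descend the tree: at each level, compare the descriptor against only
// the k children of the current node and follow the closest one. With
// k = 10 and L = 6 that is ~60 comparisons per feature, not 10^6.
uint32_t lookupWord(const Node& root, const std::vector<uint8_t>& desc) {
    const Node* node = &root;
    while (!node->children.empty()) {
        const Node* best = nullptr;
        int bestDist = std::numeric_limits<int>::max();
        for (const Node& child : node->children) {
            int d = hamming(desc, child.center);
            if (d < bestDist) { bestDist = d; best = &child; }
        }
        node = best;
    }
    return node->wordId;
}
```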

joyousrabbit commented 8 years ago

@rikpires Thank you for the comment. What would happen if we kept only one level in DBoW2, but with 1,000 leaves?
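A back-of-the-envelope comparison (my own numbers, not from the thread or from DBoW2) suggests one immediate effect: a single-level vocabulary loses the logarithmic lookup, since every descriptor must be compared against all 1,000 leaf centers instead of descending past roughly k·L ≈ 60 centers, and with only 1,000 words each word's inverted-index list also grows much longer, so candidate filtering becomes less selective:

```cpp
#include <iostream>

int main() {
    // Hypothetical per-frame quantization cost in descriptor comparisons.
    const long k = 10, L = 6;         // hierarchical: 10^6 words
    const long hierarchical = k * L;  // descend L levels, k centers per level
    const long flat = 1000;           // single level: compare all 1,000 leaves
    const long features = 1000;       // typical ORB features per frame

    std::cout << "hierarchical (1M words): " << hierarchical * features
              << " comparisons/frame\n"   // 60000
              << "flat (1000 words):      " << flat * features
              << " comparisons/frame\n";  // 1000000
    return 0;
}
```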