raulmur / ORB_SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

Improve Feature Matching in ORB SLAM 2? #630

Open lawchekun opened 6 years ago

lawchekun commented 6 years ago

Hi,

I'm wondering if there are any parameters I can tune to improve the feature matching for ORB SLAM 2?

Currently, I've set the number of features detected to 6000 and kept the default thresholds for FAST.

```yaml
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 6000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST.
# You can lower these values if your images have low contrast.
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
```
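For reference, the two-threshold fallback the config comment describes looks roughly like this per grid cell. This is only a minimal sketch with OpenCV; the real `ORBextractor::ComputeKeyPointsOctTree` also handles cell borders, pyramid levels, and keypoint distribution:

```cpp
// Sketch of the two-threshold FAST extraction described above.
// Cell geometry and bookkeeping are simplified for illustration.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::KeyPoint> ExtractCellKeypoints(const cv::Mat& cellImage,
                                               int iniThFAST, int minThFAST)
{
    std::vector<cv::KeyPoint> keypoints;
    // First pass: strict threshold, non-max suppression enabled.
    cv::FAST(cellImage, keypoints, iniThFAST, true);
    if (keypoints.empty())
    {
        // Fallback: retry with the lower threshold so low-contrast
        // cells still contribute some corners.
        cv::FAST(cellImage, keypoints, minThFAST, true);
    }
    return keypoints;
}
```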

I'm encountering problems with large rotations: the number of features detected is high, but the number of features matched is very low, which causes tracking to be lost.

Also, I'm wondering whether it makes sense to use brute-force (BF) matching of keypoints between the previous and current frame, via the SearchForInitialization function used by MonocularInitialization in Tracking.cc, as described in

https://github.com/raulmur/ORB_SLAM2/issues/512

1) TrackWithMotionModel
2) TrackReferenceKeyFrame
3) SearchForInitialization (BF matching of keypoints)

That is, when 2) TrackReferenceKeyFrame fails to return enough matches, I would use SearchForInitialization to find matches by brute force (see the sketch after the next paragraph)?

Currently, it seems I'm able to detect enough features, but the feature matching does not yield enough matches, which causes tracking to be lost.
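For context, the relevant control flow in Tracking::Track looks roughly like the following. The third, brute-force fallback is the hypothetical change being asked about, not existing ORB_SLAM2 code, and `BruteForceFallback()` is a name invented here only for illustration:

```cpp
// Simplified from ORB_SLAM2 Tracking::Track (normal-tracking branch).
bool bOK;
if (mVelocity.empty() || mCurrentFrame.mnId < mnLastRelocFrameId + 2)
{
    bOK = TrackReferenceKeyFrame();
}
else
{
    bOK = TrackWithMotionModel();
    if (!bOK)
        bOK = TrackReferenceKeyFrame();
}
// Proposed extra step (hypothetical): if BoW matching also fails, try
// exhaustive descriptor matching against the reference keyframe before
// declaring tracking lost, similar in spirit to SearchForInitialization.
if (!bOK)
    bOK = BruteForceFallback();
```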

Some debugging messages I have added give the following output:

```
Number of Features in Frame: 6000
TrackWithMotionModel Number of Matches: 24
Number of Map Matches: 1
TrackReferenceKeyFrame Number of matches via BoW: 33
virtual int g2o::SparseOptimizer::optimize(int, bool): 0 vertices to optimize, maybe forgot to call initializeOptimization()
virtual int g2o::SparseOptimizer::optimize(int, bool): 0 vertices to optimize, maybe forgot to call initializeOptimization()
virtual int g2o::SparseOptimizer::optimize(int, bool): 0 vertices to optimize, maybe forgot to call initializeOptimization()
Number of nmatchesMap: 0
No. of matches with Monocular Initialization Test: 154
```

My question, regarding feature-matching performance: would the BF method (via SearchForInitialization) be better than TrackWithMotionModel and TrackReferenceKeyFrame, aside from being more computationally intensive because it does not discard pairs early? (See the sketch below.)
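As a point of comparison, a genuinely exhaustive match over ORB descriptors can be written with OpenCV. This is a minimal sketch, not what ORB_SLAM2 itself does; SearchForInitialization actually restricts the search to a window around each keypoint rather than matching all pairs:

```cpp
// Minimal brute-force ORB matching sketch with OpenCV, using Lowe's
// ratio test to reject ambiguous matches. Not ORB_SLAM2 code.
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::DMatch> MatchBruteForce(const cv::Mat& desc1,
                                        const cv::Mat& desc2)
{
    // Hamming norm, since ORB descriptors are binary strings.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);  // two best candidates each

    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
    {
        // Keep a match only if it is clearly better than the runner-up.
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            good.push_back(m[0]);
    }
    return good;
}
```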

It could also be that I have issues on the implementation side which lead me to believe that SearchForInitialization yields better matching performance.

My pipeline processes 2064 x 1544 images from a live global-shutter camera feed at 30 FPS.

[Image: example frame when tracking is about to be lost; blue rectangles are detected keypoints, green are matched keypoints. This is from a screenshot, so the quality is rather low...]

Please advise!

AlejandroSilvestri commented 6 years ago

@lawchekun , hi

Large rotations are indeed a big problem, inherent to monocular SLAM. Pure rotation without translation takes away the chance to triangulate new 3D map points.

Visual SLAM consists of matching features from the current frame against 3D points from the map, not against features from another frame or keyframe.

Maybe your problem is not feature matching: because of the rotation, your image is losing map points, so there is nothing to match against!
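To spell out why pure rotation is degenerate (my addition, with $K$ the camera intrinsics and $R$ the rotation): when the translation is zero, corresponding pixels in the two views are related by the infinite homography

$$x_2 \simeq K R K^{-1} x_1,$$

which is independent of scene depth. Every depth along the ray produces the same $x_2$, so no new 3D points can be triangulated from such a pair of views.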

lawchekun commented 6 years ago

@AlejandroSilvestri , Hi,

Thanks for the reply! Much appreciated!

I am actually rotating about the robot's axis, which is not aligned with the camera center (i.e., not a 100% pure rotation about the camera axis, as there is some translation). Would that still cause performance issues?

I see! I might have mixed up visual SLAM with VO (visual odometry), which matches features from the current frame to the previous frame.

Would you happen to have any suggestions on parameters I can tune to get more map points, and hence better matching, in this scenario?

Please advise!

AlejandroSilvestri commented 6 years ago

@lawchekun , unfortunately I don't have any suggestions on parameters, but I can say that the misalignment between the camera and robot axes improves the chance of creating new keyframes, extending the map and avoiding losing track.
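A quick sanity check of why the offset helps (my addition; $r$ denotes the assumed distance from the rotation axis to the camera center): rotating by an angle $\theta$ about an axis offset from the camera moves the camera center along a chord of length

$$\|t\| = 2\,r\,\sin(\theta/2),$$

so the offset turns a pure rotation into rotation plus translation, providing the baseline needed to triangulate new map points.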