raulmur / ORB_SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

How can I use only selected features?? #374

Open · hyunhoJeon opened this issue 7 years ago

hyunhoJeon commented 7 years ago

I would like to use only selected features and their depth values, obtained through post-processing, rather than the features produced by the Frame constructor. In other words, I want to estimate motion using only the features selected by my own algorithm. To do this I need to either feed the selected features into ORB-SLAM2, or restrict which of ORB-SLAM2's own features are used during motion estimation. Has anyone tried this? Does anyone have any ideas?

AlejandroSilvestri commented 7 years ago

Hi @hyunhoJeon

ORB-SLAM2 uses ORB features. ORB is, roughly speaking, FAST keypoints combined with BRIEF descriptors.

The frame pose is obtained by matching the frame features' descriptors against the descriptors of the 3D map points.
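
For readers new to ORB, here is a minimal OpenCV-only sketch of what "FAST-style keypoints + binary descriptors + Hamming matching" looks like outside ORB-SLAM2; the file names and parameters are placeholders, not anything from the project:

```cpp
// Minimal ORB detection + brute-force Hamming matching with OpenCV.
// "reference.png" and "frame.png" are placeholder inputs.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat imgA = cv::imread("reference.png", cv::IMREAD_GRAYSCALE);
    cv::Mat imgB = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (imgA.empty() || imgB.empty())
        return 1;

    // ORB = oriented FAST keypoints + rotated BRIEF binary descriptors.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);

    std::vector<cv::KeyPoint> kpsA, kpsB;
    cv::Mat descA, descB;
    orb->detectAndCompute(imgA, cv::noArray(), kpsA, descA);
    orb->detectAndCompute(imgB, cv::noArray(), kpsB, descB);
    if (descA.empty() || descB.empty())
        return 1;

    // Binary descriptors are compared with the Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);

    // In ORB-SLAM2 the analogous matches (frame descriptors vs. map point
    // descriptors) are what feed the pose computation.
    std::cout << matches.size() << " matches\n";
    return 0;
}
```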

You can remove the features you don't want to process right after FAST detection, in the ORBextractor class.
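
A rough, self-contained sketch of that idea using plain OpenCV FAST; the rejection region, threshold, and file name below are illustrative, not ORB-SLAM2's actual ORBextractor code:

```cpp
// Sketch: detect FAST keypoints, then drop the ones you don't want
// before descriptors are computed.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE); // placeholder input
    if (img.empty())
        return 1;

    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(img, keypoints, /*threshold=*/20, /*nonmaxSuppression=*/true);

    // Example filter: reject every keypoint inside some image region.
    const cv::Rect2f rejectArea(0.f, 0.f, img.cols / 2.f, img.rows / 2.f);
    keypoints.erase(
        std::remove_if(keypoints.begin(), keypoints.end(),
                       [&](const cv::KeyPoint& kp) { return rejectArea.contains(kp.pt); }),
        keypoints.end());

    // In ORBextractor, the surviving keypoints would then get ORB descriptors.
    std::cout << keypoints.size() << " keypoints kept\n";
    return 0;
}
```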

hyunhoJeon commented 7 years ago

Thank you very much, @AlejandroSilvestri! I have a few more questions. I am running ORB-SLAM2 on Windows, where debug mode does not work, so I have some trouble analyzing the code.

My first question is how to remove a feature. Is there a remove function, for example, or should I set its depth information to 0? What should I do?

Second, I know that ORB-SLAM uses g2o for motion estimation. Does g2o work from 3D-3D matches or 3D-2D matches? I thought 3D-2D matching would be done with PnP, but I found that PnP is only used for relocalization, which confuses me, since I have only worked with 3D-2D matching before. So how does g2o estimate motion: from 3D-3D or 3D-2D correspondences?

AlejandroSilvestri commented 7 years ago

Well, in Frame every undistorted keypoint is added to the vector mvKeysUn, and its descriptor to mDescriptors, at the same index.

You can remove some of them. But if you want to keep them for everything except pose optimization, I believe it would be better to filter them in that step instead.
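
If you do remove entries, the keypoint vector and the descriptor matrix have to be filtered together so their indices stay aligned. Here is a minimal sketch of one way to do that with a caller-supplied keep predicate; the function and variable names are illustrative, not Frame's actual code:

```cpp
// Sketch: filter a keypoint vector and its descriptor matrix together,
// so index i still refers to the same feature in both containers.
#include <opencv2/opencv.hpp>
#include <functional>
#include <utility>
#include <vector>

void filterFeatures(std::vector<cv::KeyPoint>& keypoints,
                    cv::Mat& descriptors, // one row per keypoint, same index
                    const std::function<bool(const cv::KeyPoint&)>& keep)
{
    std::vector<cv::KeyPoint> keptKps;
    cv::Mat keptDesc;
    for (size_t i = 0; i < keypoints.size(); ++i)
    {
        if (keep(keypoints[i]))
        {
            keptKps.push_back(keypoints[i]);
            keptDesc.push_back(descriptors.row(static_cast<int>(i)));
        }
    }
    keypoints = std::move(keptKps);
    descriptors = keptDesc;
}
```

Keep in mind that Frame holds other containers indexed the same way (the raw keypoints and, for stereo/RGB-D, depth values and map point pointers), so a real filter would have to treat all of them consistently.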

g2o is a solver, well suited to bundle adjustment and large graph-based optimization problems. ORB-SLAM2 uses it for many different purposes.

Optimizer::PoseOptimization uses g2o to compute the frame pose from 2D-3D matches (the current frame's keypoints vs. 3D map points). The reason PnP is not used here is performance: PoseOptimization is lighter.

Why bother with PnP in relocalization, then? PoseOptimization requires a good initial pose estimate. ORB-SLAM2 has one during tracking thanks to the motion model, but not during relocalization: tracking is lost, so the system has no clue where it is.
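
As a self-contained illustration of the 3D-2D idea, and of why no initial guess is needed in that situation, here is a small sketch using OpenCV's solvePnPRansac on synthetic data; all numbers are made up for the example, and ORB-SLAM2 itself uses its own bundled PnP solver rather than OpenCV's:

```cpp
// Sketch: recover a camera pose from 3D-2D correspondences alone,
// which is the situation during relocalization (no motion-model prior).
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Synthetic 3D points and a known pose, so the example actually runs.
    std::vector<cv::Point3f> mapPoints = {
        {0.f, 0.f, 5.f}, {1.f, 0.f, 6.f}, {0.f, 1.f, 7.f},
        {1.f, 1.f, 5.f}, {-1.f, 0.5f, 6.f}, {0.5f, -1.f, 8.f}};
    cv::Mat K = (cv::Mat_<double>(3, 3) << 500, 0, 320, 0, 500, 240, 0, 0, 1);
    cv::Mat rvecTrue = (cv::Mat_<double>(3, 1) << 0.05, -0.02, 0.01);
    cv::Mat tvecTrue = (cv::Mat_<double>(3, 1) << 0.1, -0.1, 0.2);

    std::vector<cv::Point2f> keypoints;
    cv::projectPoints(mapPoints, rvecTrue, tvecTrue, K, cv::noArray(), keypoints);

    // PnP + RANSAC estimates the pose without any initial guess.
    cv::Mat rvec, tvec, inliers;
    bool ok = cv::solvePnPRansac(mapPoints, keypoints, K, cv::noArray(),
                                 rvec, tvec, /*useExtrinsicGuess=*/false,
                                 /*iterationsCount=*/100, /*reprojectionError=*/4.0f,
                                 /*confidence=*/0.99, inliers);
    std::cout << "ok=" << ok << "\nrecovered tvec =\n" << tvec << "\n";
    return 0;
}
```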

So perhaps a good place to filter features is in PoseOptimization, in the loop that builds the g2o::EdgeSE3ProjectXYZOnlyPose edges from keypoints and map points. You don't erase the features; you only prevent the undesirable ones from being used in pose optimization.
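
As a standalone sketch of that approach: decide, per feature index, whether it may contribute to pose optimization, and simply skip the rest; nothing gets erased. The container and function names below are illustrative, not the actual PoseOptimization code, and the selection rule is left to the caller:

```cpp
// Sketch: collect the feature indices allowed to contribute to pose
// optimization, without modifying the frame's containers.
#include <opencv2/opencv.hpp>
#include <functional>
#include <vector>

std::vector<int> selectPoseOptimizationFeatures(
    const std::vector<cv::KeyPoint>& keypointsUn,            // e.g. the undistorted keypoints
    const std::vector<bool>& hasMapPoint,                    // same size: true if index i has a matched 3D map point
    const std::function<bool(const cv::KeyPoint&)>& allowed) // your own selection rule
{
    std::vector<int> usedIndices;
    for (size_t i = 0; i < keypointsUn.size(); ++i)
    {
        if (!hasMapPoint[i])
            continue; // no 2D-3D match at this index
        if (!allowed(keypointsUn[i]))
            continue; // filtered out: skip edge creation for this feature
        // In PoseOptimization, this is where the unary reprojection edge
        // (keypoint measurement vs. map point) would be added to the optimizer.
        usedIndices.push_back(static_cast<int>(i));
    }
    return usedIndices;
}
```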

hyunhoJeon commented 7 years ago

Thank you for your kindness, @AlejandroSilvestri.

Thanks to your answers, I was able to understand the parts that were difficult for me. :)

IndShiv commented 1 year ago

@hyunhoJeon @AlejandroSilvestri Thank you for this thread; it has helped me understand a few things. I was hoping you could clarify something I am trying to do that is closely related to the above:

I am a beginner when it comes to SLAM in general, but I wanted to ask where I could remove keypoints in the RGB-D pipeline of ORB-SLAM3.

I have tried doing it in the Frame class, i.e. on mCurrentFrame within the GrabImageRGBD method, but I run into a lot of issues. I am basically using a filtering algorithm I developed, based on an object detection model, and I want to remove keypoints within a certain area.

Before running my algorithm, I simply tried reassigning mCurrentFrame.mvKeys to a new std::vector<cv::KeyPoint> called New_KP, which is just the first 800 keypoints of mCurrentFrame.mvKeys, but I end up with a lot of errors. I would appreciate any guidance!