martinakos opened 5 years ago
I have created a resolve_relocalization_externally() function that sets the current frame pose, creates a new keyframe via CreateNewKeyFrame(), then attempts to track against the local map with TrackLocalMap(), and sets the tracking state accordingly.
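For reference, here is a minimal sketch of what such a function might look like as a member of ORB-SLAM2's Tracking class. SetPose(), CreateNewKeyFrame(), TrackLocalMap(), mState, and the OK/LOST states are real ORB-SLAM2 members, but the signature and control flow below are my assumptions, not the actual code:

```cpp
// Hypothetical sketch, assumed to live inside ORB-SLAM2's Tracking class.
void Tracking::resolve_relocalization_externally(const cv::Mat &Tcw_external)
{
    // Overwrite the current frame's pose with the externally estimated one.
    mCurrentFrame.SetPose(Tcw_external);

    // Force-insert a keyframe at this pose so mapping can continue from it.
    CreateNewKeyFrame();

    // Try to track the local map from the new pose and update the state.
    if (TrackLocalMap())
        mState = OK;
    else
        mState = LOST;
}
```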
This works well if the current frame has enough feature correspondences with the local map. I can see my newly inserted keyframe in the map, and a second later this keyframe is automatically connected to the rest of the map and its pose corrected (as a result of the local BA).
However, if tracking is lost and I call my function, I insert the new keyframe at my externally estimated pose and tracking can continue from there, but the keyframe is disconnected from the rest of the covisibility graph. This happens because there aren't enough feature matches between my inserted keyframe and the rest of the map. I imagine this makes sense: if there were enough matches between the current frame and any part of the map, the relocalization would have succeeded on its own, without me having to insert a keyframe with an externally estimated pose.
So if I want a single covisibility graph, rather than several disconnected ones, my function is not very useful when the current frame is not looking at an already explored area of the map.
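One way to detect this case up front might be to count how many local map points actually project into the current frame before force-inserting the keyframe. ORBmatcher::SearchByProjection() and mvpLocalMapPoints are real ORB-SLAM2 members; the matcher parameters and the threshold below are made-up placeholders:

```cpp
// Hedged sketch: count projected matches against the local map to predict
// whether a force-inserted keyframe would connect to the covisibility graph.
// Assumes mCurrentFrame's pose has already been set externally.
ORBmatcher matcher(0.9, true);
int nMatches = matcher.SearchByProjection(mCurrentFrame, mvpLocalMapPoints, 5.0f);
if (nMatches < 30)
{
    // Too few matches: the new keyframe would likely end up isolated
    // from the rest of the covisibility graph.
}
```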
Sometimes I see that my inserted keyframe looks at an area of the scene that had been observed previously, but from a totally different viewpoint, and as a result not enough feature matches are found. That's unfortunate; I thought ORB features were more invariant to viewpoint changes. A solution I could think of for this case is warping my current frame, using the externally estimated pose, to match the viewpoint of other existing keyframes, then extracting features from the warped frame and hoping that the feature matching with existing keyframes would now be more successful. I think it's going to be too much work to do this, though.
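For what it's worth, if the observed region were roughly planar, this warp could be approximated with the standard plane-induced homography H = K (R - t nᵀ / d) K⁻¹ and OpenCV's warpPerspective(). Everything below (the function name, the planarity assumption, the relative-pose inputs) is hypothetical and not part of ORB-SLAM2:

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical sketch of the viewpoint-warping idea, assuming the viewed patch
// is roughly planar with normal n and distance d in the current camera frame.
// R, t are the relative rotation/translation from the current frame to the
// target keyframe; all matrices are CV_64F.
cv::Mat WarpToKeyFrameViewpoint(const cv::Mat &currentImage,
                                const cv::Mat &K,  // 3x3 camera intrinsics
                                const cv::Mat &R,  // 3x3 rotation, current -> keyframe
                                const cv::Mat &t,  // 3x1 translation, current -> keyframe
                                const cv::Mat &n,  // 3x1 plane normal in current frame
                                double d)          // plane distance along n
{
    // Plane-induced homography: H = K (R - t * n^T / d) K^{-1}
    cv::Mat H = K * (R - t * n.t() / d) * K.inv();
    cv::Mat warped;
    cv::warpPerspective(currentImage, warped, H, currentImage.size());
    return warped; // extract ORB features from this image and match as usual
}
```

A per-region plane hypothesis would still be needed, which is part of why it feels like too much work.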
If anybody has a suggestion, please post. Otherwise I'll close this issue in a few days, as I think the modification is not going to be worth it for my use case.
Hi, I am interested in the work you have done, and I would appreciate it if you could shed light on these questions:
What sensor do you use to get the external pose?
Could the work solve the scale problem in monocular SLAM?
Can I refer to your improvement to ORB-SLAM2?
I would like to modify orbslam2 to use an external camera pose estimate to resolve relocalization. What would this modification involve?
I imagine I would need to insert a new keyframe with whatever features are found in the current frame and set the camera pose from my external pose. Anything else? Would I need to ask the mapper to attempt a loop closure? Would this new keyframe survive keyframe culling if it's not connected to the rest of the map yet?