prashanthr05 closed this issue 2 years ago.
Example C++ code for ORB Tracker
One of the state-of-the-art VINS methods, known as VINS-MONO, implements a point feature tracker as open-source code. I modified their code slightly into a PointsTracker class, which is to be used by the ImageProcessor class responsible for managing the tracking of points and lines.
The PointsTracker class uses an optical-flow-based KLT tracker to track features between two images, while rejecting outliers with RANSAC based on an essential matrix (VINS-MONO uses RANSAC based on a fundamental matrix; the former uses a 5-point RANSAC while the latter uses an 8-point RANSAC). Feature detection is done using the goodFeaturesToTrack detector, detecting new features only in regions of the image where no features have already been detected. We keep track of the features with the longest frame counts, to be used for building factors.
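To illustrate the idea of detecting new features only in empty regions, here is a minimal stdlib-only sketch (names and the function itself are illustrative, not the kindyn-vio API): candidate corners are accepted only if they fall outside a suppression radius around features already being tracked, mimicking the mask passed to goodFeaturesToTrack.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Hypothetical helper: keep only candidate detections that are farther than
// minDistance pixels from every currently tracked feature, so new corners
// are added only in regions not already covered by the tracker.
std::vector<Pt> filterNewDetections(const std::vector<Pt>& tracked,
                                    const std::vector<Pt>& candidates,
                                    float minDistance)
{
    std::vector<Pt> accepted;
    for (const auto& c : candidates)
    {
        bool regionFree = true;
        for (const auto& t : tracked)
        {
            if (std::hypot(c.x - t.x, c.y - t.y) < minDistance)
            {
                regionFree = false;
                break;
            }
        }
        if (regionFree)
        {
            accepted.push_back(c);
        }
    }
    return accepted;
}
```

In the real pipeline this is done with an image mask rather than pairwise distances, but the acceptance criterion is the same.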
An initial implementation was added in this commit https://github.com/prashanthr05/kindyn-vio/commit/1c810cc5ae0a4932caa592316ccd118aa19787e2.
We may close this issue, given the preliminary implementation of the PointsTracker class, which is used by configuring the ImageProcessor class. A sample test case is available in PointsTrackerTest.
The PointsTracker class was improved in https://github.com/dic-iit/kindyn-vio/commit/bb6907128e3a172f8b49169b05b8a6b3c306a5f5 by considering a bi-directional optical flow check for tracking features across consecutive frames. The video below shows the features tracked across multiple frames, where each marker is labeled P<feature-id>, <frame-track-count>, with P signifying a point feature. For example, P3, 10 means the point feature was assigned ID 3 and has been tracked for 10 consecutive frames. The color of the marker changes from blue to red depending on how long it has been tracked.
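A minimal sketch of the bi-directional consistency test (the function name and tolerance are illustrative assumptions, not the kindyn-vio API): a feature is tracked forward from the previous image into the current one, then tracked backward from the current image; it is kept only if the backward track lands close to where it started.

```cpp
#include <cassert>
#include <cmath>

struct Pt { float x, y; };

// Hypothetical check: after forward KLT tracking prev -> curr and backward
// tracking curr -> prev, accept the feature only if the round-trip endpoint
// lies within tolPx pixels of the original location in the previous image.
bool isConsistentTrack(const Pt& inPrev, const Pt& backFromCurr, float tolPx = 0.5f)
{
    return std::hypot(inPrev.x - backFromCurr.x,
                      inPrev.y - backFromCurr.y) <= tolPx;
}
```

Features failing this check are dropped before the RANSAC stage, which removes many unstable tracks cheaply.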
https://user-images.githubusercontent.com/6506093/127321716-7c632284-7efd-4071-99de-fa81e2ccfd29.mp4
This example is implemented in the PointsTrackerTest class.
It can be noticed that the features close to the ArUco marker are tracked as expected, while a very significant feature, the corner of the laptop, is not being tracked across images. This might be because, being a single point, it gets rejected as an outlier in the RANSAC-based elimination, which uses the essential matrix as the model fit. Indirectly, this point might not be as explicable within the rigid-body transformation between the two camera frames as the dense points over the ArUco marker.
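To make the rejection criterion concrete, here is a stdlib-only numeric sketch (the matrix values and helper are illustrative, not taken from the project): under the epipolar constraint, an inlier correspondence in normalized coordinates satisfies x2^T E x1 = 0, so RANSAC keeps a tracked point only if this residual is small for the fitted essential matrix E.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Hypothetical helper: epipolar residual |x2^T E x1| for a correspondence
// (x1, x2) given in homogeneous normalized image coordinates. A small
// residual means the pair is consistent with the rigid-body motion encoded
// by the essential matrix E; a large one marks a RANSAC outlier.
double epipolarResidual(const Mat3& E, const Vec3& x1, const Vec3& x2)
{
    Vec3 Ex1{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            Ex1[i] += E[i][j] * x1[j];

    double r = 0.0;
    for (int i = 0; i < 3; ++i)
        r += x2[i] * Ex1[i];
    return std::abs(r);
}
```

For a pure translation along the camera x-axis, E = [t]_x with t = (1, 0, 0); correspondences that keep the same normalized y-coordinate then have zero residual, while a point whose apparent motion violates this (like an isolated corner mismatched by the flow) accumulates a large residual and is eliminated.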
I had already implemented reference code in C++/Matlab. I need to find it and make it into proper classes.