rpng / MINS

An efficient and robust multisensor-aided inertial navigation system with online calibration that is capable of fusing IMU, camera, LiDAR, GPS/GNSS, and wheel sensors. Use cases: VINS/VIO, GPS-INS, LINS/LIO, multi-sensor fusion for localization and mapping (SLAM). This repository also provides multi-sensor simulation and data.
GNU General Public License v3.0

Question about the feature representation #23

Closed se7oluti0n closed 4 months ago

se7oluti0n commented 6 months ago

Dear MINS developer,

First of all, thanks for sharing your awesome work. According to the report *Visual-Inertial Odometry on Resource-Constrained Systems*, which compares different feature representations, the AHP and IDP parameterizations perform significantly better than the XYZ parameterization.

Currently, MINS only supports GLOBAL_3D and GLOBAL_FULL_INVERSE_DEPTH, which I suppose correspond to the XYZ parameterization in the paper. So my question is: why does MINS only support global representations? Would it be possible to integrate all the feature representations from OpenVINS into MINS to improve tracking stability? Thanks in advance.

WoosikLee2510 commented 4 months ago

Hi, I am sorry for my delayed reply. Anchored feature representations can provide better performance, but they require a solid camera pose (clone) to be represented in. OpenVINS, for example, supports anchored feature representations because it carries IMU poses at camera times. However, MINS carries IMU poses at arbitrary times, which makes the anchored representation more inconsistent, especially when you have inaccurate extrinsic and time-offset calibration between the camera and the IMU. For these reasons, I decided not to support AHP in MINS. For IDP, yes - it might help. If you can make a pull request that would be great; otherwise, I will try to find time to implement it.
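
For readers following along, the contrast between the two global parameterizations discussed here can be sketched as a simple change of variables: XYZ stores the point directly, while a full-inverse-depth form stores a bearing plus inverse range, which behaves better for distant features. The (azimuth, elevation, inverse-range) convention below is an assumption chosen for illustration; it is not taken from the MINS or OpenVINS source.

```python
import numpy as np

def xyz_to_inverse_depth(p):
    """Global XYZ -> (theta, phi, rho): azimuth, elevation, inverse range.

    Convention (assumed for illustration): theta is the azimuth measured
    from the +z axis toward +x, phi is the elevation toward +y, and
    rho = 1 / ||p|| is the inverse range.
    """
    x, y, z = p
    r = np.linalg.norm(p)
    theta = np.arctan2(x, z)          # azimuth in the x-z plane
    phi = np.arcsin(y / r)            # elevation out of the x-z plane
    return np.array([theta, phi, 1.0 / r])

def inverse_depth_to_xyz(f):
    """Inverse of the mapping above: recover the global XYZ point."""
    theta, phi, rho = f
    return (1.0 / rho) * np.array([
        np.cos(phi) * np.sin(theta),  # x
        np.sin(phi),                  # y
        np.cos(phi) * np.cos(theta),  # z
    ])
```

The round trip `inverse_depth_to_xyz(xyz_to_inverse_depth(p))` recovers `p`, so the two forms carry the same information; the practical difference is in how linearization errors behave in the filter, which is what the cited report evaluates.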