KumarRobotics / msckf_vio

Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight

the role of the feature position during estimation? #104

Closed taogashi closed 3 years ago

taogashi commented 3 years ago

As pointed out in the paper "Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight", by projecting onto the null space, the actual point coordinates (x, y, z) vanish from the measurement function. Does this mean that an arbitrary feature position has no impact on the later estimation? (my math sucks) In some cases the state drifts very fast, the camera position is obviously wrong, and feature.initializePosition, which depends on it, also won't be right. If we can get a correct feature position (using only the first stereo pair), will it help convergence?

ke-sun commented 3 years ago

> As pointed out in the paper "Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight", by projecting onto the null space, the actual point coordinates (x, y, z) vanish from the measurement function. Does this mean that an arbitrary feature position has no impact on the later estimation? (my math sucks)

The feature positions still show up when computing the measurement Jacobians. Therefore, an accurate feature position estimate is still necessary to ensure a correct measurement update.
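To make this concrete, here is a minimal numpy sketch of the MSCKF null-space trick under the linearized model r ≈ H_x δx + H_f δp_f + n: left-multiplying by a basis N of the left null space of H_f removes the feature-error term, but H_x (and H_f itself) are still evaluated at the estimated feature position, which is why that estimate matters. Variable names and shapes here are illustrative, not this repository's API.

```python
import numpy as np

def nullspace_project(H_x, H_f, r):
    """Project the residual and state Jacobian onto the left null space
    of the feature Jacobian H_f, so the feature position error drops out
    of the measurement model (sketch of the MSCKF null-space projection)."""
    # Full SVD of H_f: the columns of U beyond rank(H_f) span its left null space.
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    N = U[:, rank:]                 # basis with N.T @ H_f ≈ 0
    return N.T @ H_x, N.T @ r      # reduced Jacobian and residual

# Toy example: 4 scalar measurements, 6-dim camera state error, 3-dim feature error.
rng = np.random.default_rng(0)
H_x = rng.standard_normal((4, 6))   # evaluated at the feature position estimate
H_f = rng.standard_normal((4, 3))
r = rng.standard_normal(4)

# One null-space direction survives (4 measurements - rank 3 of H_f).
H_x_proj, r_proj = nullspace_project(H_x, H_f, r)
```

A residual caused purely by a feature position error, r = H_f δp_f, projects to zero, which is exactly why the feature coordinates vanish from the update equation.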

> In some cases the state drifts very fast, the camera position is obviously wrong, and feature.initializePosition, which depends on it, also won't be right. If we can get a correct feature position (using only the first stereo pair), will it help convergence?

It is often the case that feature positions estimated from a single stereo pair are not sufficiently accurate because of the limited baseline. In cases of divergence, it may be helpful to just initialize the feature positions with a single stereo pair, but it would make more sense to find the root cause of the initial divergence (such as inaccurate calibration or too few features).
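For reference, single-pair stereo triangulation on a rectified pair reduces to depth from disparity, z = f·b/d; the small baseline b makes z very sensitive to disparity noise, which is the accuracy limitation mentioned above. The sketch below is illustrative (hypothetical calibration values, not this repository's initializePosition implementation).

```python
import numpy as np

def triangulate_rectified(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Triangulate a point from one rectified stereo pair.
    Depth: z = fx * baseline / disparity, with disparity d = u_left - u_right.
    Returns the 3-D point in the left-camera frame."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = fx * baseline / d
    x = (u_left - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical calibration: 0.1 m baseline, 460 px focal length.
p = triangulate_rectified(u_left=400.0, u_right=354.0, v=260.0,
                          fx=460.0, fy=460.0, cx=376.0, cy=240.0,
                          baseline=0.1)
```

Because z scales with 1/d, a one-pixel disparity error at 46 px disparity already perturbs the depth by roughly 2%, and the error grows quadratically with distance; multi-view triangulation across the feature track, as done during the measurement update, averages this out.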