If you have stereo, you shouldn't see too much drift while stationary, but it is still possible. When you are stationary it is tough to triangulate features, so you might want to check whether feature triangulation is passing, and maybe loosen the thresholds if your calibration is poor and triangulation is failing for stereo.
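As a rough sketch of where those gates live, the triangulation checks are controlled by the feature-initializer options in the estimator config. The key names below are taken from a recent estimator_config.yaml and the values are only illustrative starting points for loosening the gates, so double-check them against your OpenVINS version:

```yaml
# Feature initializer / triangulation gates (illustrative values, not defaults).
# Loosening max_cond_number and the distance/baseline bounds lets marginal
# triangulations pass, which can help if the stereo calibration is poor.
fi_triangulate_1d: false     # triangulate the full 3D position, not just depth along the anchor bearing
fi_refine_features: true     # run Gauss-Newton refinement after the linear triangulation
fi_max_runs: 5               # max Gauss-Newton iterations
fi_min_dist: 0.10            # reject features closer than this [m]
fi_max_dist: 150.0           # reject features farther than this [m]
fi_max_baseline: 200.0       # max ratio of feature distance to the baseline between observing poses
fi_max_cond_number: 20000.0  # max condition number of the linear triangulation system
```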
In the monocular case, the zero-velocity update (ZUPT) needs to be enabled to handle the stationary case, since there is no parallax to triangulate features.
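A minimal sketch of how that might look in the config, assuming the ZUPT key names from a recent estimator_config.yaml (check your version, and tune the thresholds to your IMU and frame rate):

```yaml
# Zero-velocity update (ZUPT) for stationary periods (illustrative values).
try_zupt: true
zupt_chi2_multipler: 0         # 0 skips the IMU chi2 gate and relies on the disparity check only
zupt_max_velocity: 0.1         # only attempt a ZUPT if the estimated velocity is below this [m/s]
zupt_noise_multiplier: 10      # inflate the measurement noise used in the update
zupt_max_disparity: 0.5        # treat the platform as stationary if average feature disparity is below this [px]
zupt_only_at_beginning: false  # allow ZUPTs throughout the trajectory, not just at startup
```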
Re your 1, using a projected pattern will not work. A key assumption is that environmental features are static; if you are moving and tracking features on a projected pattern that moves with you, that assumption is violated. I am not sure there is much you can do for a white wall besides using a wider-FOV camera, which is less likely to see a pure white wall without also seeing the floor or other parts of the room.
Re your 3, this is likely a function of your shutter speed, motion blur, and IMU intrinsic / noise calibration. Sometimes the filter diverges as the velocity estimate becomes poor and features start to be rejected, and the system cannot recover from these extreme cases. You could try to re-create the estimator class, or re-run the ov_init initializer to re-create the system after lost features or super high velocity, but I have not tried this myself.
If you shake the camera really fast, features will be difficult to track over the long term, so you should rely on more MSCKF features. I usually increase num_pts appropriately and reduce the number of SLAM features from 50 to 25, which leaves more tracks available as MSCKF features and improves robustness.
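As a sketch of what that change looks like in the config (key names as in a recent estimator_config.yaml; the numbers just mirror the 50-to-25 suggestion above and are not recommended defaults):

```yaml
# Feature budget for aggressive motion (illustrative values).
num_pts: 250             # extract more points per frame so short MSCKF tracks stay plentiful
max_slam: 25             # fewer long-lived SLAM features (down from e.g. 50)
max_slam_in_update: 25   # cap on SLAM features used in a single update
max_msckf_in_update: 40  # cap on MSCKF features used in a single update
```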
Thank you @borongyuan @goldbattle for the detailed response, I will take your suggestions into consideration!
Perhaps a backup sensor such as wheel odometry or an optical-flow sensor could be used to improve the velocity estimate? The repeatability of the ADNS-3080 sensor on asphalt looks good, and it locks on quickly enough to work at high speeds; I think it could also cover planar scenes for a short time. I would also add that there is room for improvement here: if you illuminate the scene with an infrared laser instead of LEDs, you might get a "laser mouse" effect, where the sensor keeps working even on glass surfaces on which no detector would find any keypoints.
Hi, how can I tune the triangulation thresholds in OpenVINS?
Hi, I have been running a D435i with OpenVINS using the onboard infrared stereo cameras, with the IR projector disabled. It works great while there are features in the environment.
There are several cases where it starts drifting, and I would like to ask for recommendations:
For 1, I'm thinking that if I combine it with RGB-D, that should help create texture/features on white walls. The problem is that if I enable the IR projector, it interferes with OpenVINS, which is using the infrared stereo pair for VIO...
For 2, I'm not really sure... do I need to re-calibrate and have the camera face vertically up to see the calibration board?
For 3, is there any way to do large-motion rejection?
Lastly, after any of these drift events, if I then turn the camera toward a known area with lots of features, most of the time it looks like it re-localizes, or stabilizes in another position (not perfectly re-localized), but sometimes it continues to drift despite many features being detected. Do I then need to reset the covariance in my odometry? (If so, how do I "reset" properly?)