introlab / rtabmap

RTAB-Map library and standalone application
https://introlab.github.io/rtabmap

Lidar Odometry for iOS devices #1026

Open Genozen opened 1 year ago

Genozen commented 1 year ago

Hello, I am interested in getting the values of the lidar odometry component in the iOS version of RTAB-Map (iPhone, iPad).

My understanding is that the RTAB-Map SLAM stack fuses the VIO from ARKit with lidar odometry to do SLAM.

My goals are:

  1. Quantify how good the lidar odometry estimates are with the VIO turned off.
  2. Turn off the VIO from ARKit, adapt another open-source algorithm, and fuse it with the lidar odometry.
  3. Run lidar odometry only when in complete darkness (the ARKit portion currently stops the app from running if the camera is occluded or facing a featureless surface).

Referencing the RTAB-Map paper: https://introlab.3it.usherbrooke.ca/mediawiki-introlab/images/7/7a/Labbe18JFR_preprint.pdf

Where can I find the code to "turn off" the VIO portion and get only the lidar odometry portion?

Right now my approach is to bypass the VIO by running RTAB-Map in the dark. To avoid the "Avoid featureless surface" and "Camera occluded view" messages, which stop the app, I simply force the values:

```swift
accept = true
self.globalStatus = "Normal"
```


matlabbe commented 1 year ago

rtabmap iOS doesn't do lidar odometry. Even if you bypass the check and forward lidar/image data without the ARKit pose to the rtabmap library, rtabmap will complain that there is no valid pose. If you want to replicate this example on iOS, you would need to build libpointmatcher and add it as a dependency, then in the C++ part instantiate an OdometryF2M object using parameters similar to the example's (Reg/Strategy=1 for lidar-only odometry), and update the OdometryF2M on each callback from the Swift code.

Kishore-1297 commented 6 months ago

@matlabbe Hi, I am running into similar "Avoid featureless surface" and "Camera occluded view" issues when trying to scan low-lit environments. In addition to receiving these messages from ARKit on the UI screen, the odometry estimate drifts to far-off points during longer periods (roughly 5 to 10 seconds) of full or partial camera occlusion. Is this because of poor dead reckoning from ARKit, or is there something that can be done on the RTAB-Map side to handle these odometry issues? Thanks a lot for your time.

matlabbe commented 6 months ago

> Is this because of poor dead reckoning from ARKit?

Yes, we use ARKit odometry directly without any refinement on the RTAB-Map side (we only do loop closures). If the camera is occluded (or the scene is dark), ARKit may rely only on IMU integration for some seconds (until it stops completely), which can work well sometimes and badly at other times.

It is possible to recover a scan and remove these drifts afterwards, but it requires some manual steps in rtabmap-databaseViewer to remove the bad links. I have recovered some scans this way.
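The manual recovery above is done in the database viewer GUI; a rough sketch (the database path is a hypothetical example, and the GUI steps are approximate):

```shell
# Open the recorded session in rtabmap-databaseViewer
# (the path below is a hypothetical example).
rtabmap-databaseViewer ~/Documents/RTAB-Map/my_scan.db

# In the GUI (steps are approximate):
#  - browse the graph/constraints view to locate the section where odometry drifted,
#  - reject the bad neighbor or loop-closure links,
#  - save the corrected database and re-export the map.
```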

Kishore-1297 commented 6 months ago

@matlabbe Thank you so much for your reply. Since ARKit relies on IMU readings when there are insufficient features for pose estimation, is it possible to improve the dead-reckoning performance of tracking by using IMU calibration, pre-integration techniques, or better IMU propagation methods, and hence improve the tracking in RTAB-Map? Also, is it feasible to integrate VIO information from MSCKF-based estimators like VINS-Mobile into RTAB-Map for better performance?

matlabbe commented 5 months ago

We have some third-party odometry approaches that have been integrated into rtabmap, though some of them are no longer maintained (in their original repos), which makes them difficult for us to support over time (e.g., compiling against the latest versions of other dependencies often fails). While in theory we could use any of them on mobile, I have never tried, because ARKit is already pretty good compared to the open-source approaches (and far easier to support).

For IMU calibration or dead-reckoning accuracy, I think ARKit is as good as you can get.