OV²SLAM is a Fully Online and Versatile Visual SLAM for Real-Time Applications

IMU initial guess about orientation #26

Closed TheRealKaufmann closed 3 years ago

TheRealKaufmann commented 3 years ago

Hi there,

I really adore this project!! So far it runs fine. Since I want to run it on a quadcopter next, I need correct orientations in 3D space. My plan is to use the IMU just for an initial guess of the pitch/roll orientation and the heading. Where in the code (which files) can I implement this? Where does ov2slam initialize this?

The problem is that the trajectory is not correctly aligned with the x-y grid. Also, the camera is tilted down a bit.

Screenshot from 2021-06-12 17-57-44

Thanks! Best regards, Tom

ferreram commented 3 years ago

Hi @TheRealKaufmann , Glad to hear that this project is useful to you. Initialization is explicitly performed in visual_front_end.cpp for monocular setups. In a stereo configuration, initialization is performed implicitly when the first keyframe is created, as this triggers the triangulation of stereo correspondences (in mapper.cpp).

For your use case, I would say that the easiest way would be to initialize the orientation of the camera while standing still (standing still gives you a strong prior, since you know the drone is not moving at all, which helps in removing the IMU biases). For the initialization function, I think you could call it in ov2slam.cpp, maybe right after the call to "bool is_kfreq = pvisualfrontend->visualTracking(img_left, time);". In Visual SLAM, the "world" frame (or frame of reference if you prefer) is defined by the pose of the 1st keyframe (usually set to the Identity pose if no other information is available). So, you need to align the orientation of the 1st keyframe and propagate this alignment to the other existing keyframes (and to the 3D map points), if any have already been created.
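For illustration, here is a minimal sketch of what such a stationary initialization could look like. This is not part of ov2slam: the ImuSample struct and the function name are placeholders, the accelerometer readings are assumed to already be expressed in the camera frame (otherwise the camera-IMU extrinsics must be applied first), and the use of Eigen is an assumption. It also only fixes roll and pitch, since heading is unobservable from the accelerometer alone and would need a magnetometer or an external reference.

```cpp
// Sketch (not part of ov2slam): estimate the initial world-from-camera
// rotation R_w_c0 from accelerometer samples gathered while the drone is static.

#include <Eigen/Core>
#include <Eigen/Geometry>
#include <vector>

struct ImuSample {
    Eigen::Vector3d acc;  // accelerometer reading, assumed in the camera frame [m/s^2]
};

Eigen::Matrix3d estimateInitialRotation(const std::vector<ImuSample>& samples)
{
    // Average the accelerometer readings to suppress noise; while static the
    // mean measures the reaction to gravity, i.e. it points "up" in the sensor frame.
    Eigen::Vector3d acc_mean = Eigen::Vector3d::Zero();
    for (const auto& s : samples) acc_mean += s.acc;
    acc_mean /= static_cast<double>(samples.size());

    // Rotation that maps the measured "up" direction onto the world z-axis.
    // This fixes roll and pitch only; yaw/heading stays undetermined.
    const Eigen::Vector3d z_world(0.0, 0.0, 1.0);
    const Eigen::Quaterniond q_w_c0 =
        Eigen::Quaterniond::FromTwoVectors(acc_mean.normalized(), z_world);

    return q_w_c0.toRotationMatrix();  // R_w_c0
}
```

The resulting R_w_c0 is the rotation alignment to propagate to the keyframes and map points, as described below.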

Hope it helps!

TheRealKaufmann commented 3 years ago

Hi thank you very much for the detailed response!

So all I have to do is write an initialization function in ov2slam.cpp that transforms the first keyframe in mapper.cpp and propagates this alignment?

ferreram commented 3 years ago

Yes, you can see every camera pose Tc0ci as the transformation from camera i to camera 0, and every 3D map point j as c0_Xj, i.e. they are all expressed in the frame of reference defined by the first created keyframe, whose pose is Tc0c0 (= Identity). So, once you have estimated your rotation alignment Rwc0 from your IMU, you can align everything by applying this rotation: Twci = [Rwc0 * Rc0ci | Rwc0 * tc0ci] and w_Xj = Rwc0 * c0_Xj. In terms of code, you have to go through every keyframe in mappkfs and every 3D map point in mapplms and apply this rotation to update their frame of reference.
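For illustration only, a rough sketch of that propagation loop. The Keyframe/MapPoint structs and the container names below are placeholders, not ov2slam's actual classes, which store poses and landmarks differently.

```cpp
// Sketch (not ov2slam's actual API): propagate the IMU-derived alignment R_w_c0
// to every keyframe pose Tc0ci = [Rc0ci | tc0ci] and every 3D map point c0_Xj.

#include <Eigen/Core>
#include <unordered_map>

struct Keyframe {                 // hypothetical stand-in for a keyframe
    Eigen::Matrix3d R_c0_ci;      // rotation: camera i -> first keyframe
    Eigen::Vector3d t_c0_ci;      // translation in the first-keyframe frame
};

struct MapPoint {                 // hypothetical stand-in for a 3D landmark
    Eigen::Vector3d X;            // position in the first-keyframe frame
};

void alignWorldFrame(const Eigen::Matrix3d& R_w_c0,
                     std::unordered_map<int, Keyframe>& map_kfs,
                     std::unordered_map<int, MapPoint>& map_lms)
{
    // Twci = [Rwc0 * Rc0ci | Rwc0 * tc0ci]: after this loop the stored pose
    // is expressed in the gravity-aligned world frame.
    for (auto& [id, kf] : map_kfs) {
        kf.R_c0_ci = R_w_c0 * kf.R_c0_ci;
        kf.t_c0_ci = R_w_c0 * kf.t_c0_ci;
    }
    // w_Xj = Rwc0 * c0_Xj: rotate every landmark into the same world frame.
    for (auto& [id, lm] : map_lms) {
        lm.X = R_w_c0 * lm.X;
    }
}
```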

TheRealKaufmann commented 3 years ago

Thank you very much! I will close this issue for now.