hku-mars / ImMesh

ImMesh: An Immediate LiDAR Localization and Meshing Framework

Why do you use r3live++ for estimating the camera’s poses in application-2? #8

Closed Neo-cyber-hubb closed 1 year ago

Neo-cyber-hubb commented 1 year ago

This is excellent work. I have read your paper, ImMesh_v1.pdf, and I have a question. ImMesh has its own localization module for estimating the lidar's poses, so it could obtain the camera's poses from the lidar's poses and the extrinsic parameters between the camera and the lidar. Why, then, do you use r3live++ for estimating the camera's poses in application-2? Is it because the lidar and camera are not time-synchronized, or for other reasons? Looking forward to your reply.
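For context, the pose chaining described above can be sketched as follows (a minimal numpy sketch; the frame names and numeric values are illustrative assumptions, not taken from ImMesh):

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical values (not from ImMesh): lidar pose in the world frame,
# and the camera-to-lidar extrinsic obtained from calibration.
T_world_lidar = se3(np.eye(3), np.array([1.0, 2.0, 0.5]))
T_lidar_camera = se3(np.eye(3), np.array([0.05, 0.0, -0.02]))

# Camera pose in the world frame: chain the two transforms.
T_world_camera = T_world_lidar @ T_lidar_camera
```

This composition is exact only when the extrinsic is accurate and both sensors refer to the same time instant, which is what the discussion below is about.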

ziv-lin commented 1 year ago

Regarding your question, you are correct that ImMesh has its own localization module for estimating the lidar's poses.

However, in application-2 we used r3live++ for estimating the camera's poses because: 1) in most real-world applications, the lidar and camera may not be time-synchronized and might not be well calibrated; 2) even for a time-synchronized sensor set, the camera needs time to properly expose an image, yet the timestamp is given at the start (or end) of the frame, which introduces an unknown time-offset error.

Considering these aspects, we use the VIO subsystem of r3live++ to calibrate both the spatial and temporal extrinsics online.
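To illustrate why the unknown exposure-time offset matters, here is a minimal sketch (not r3live++ code; the trajectory, timestamps, and offset value are made-up assumptions): if the true mid-exposure instant is the frame timestamp plus some offset dt, the pose used for that camera frame should be queried at t + dt along the odometry trajectory.

```python
import numpy as np

def interp_position(traj_t, traj_p, t_query):
    """Linearly interpolate a 3D position trajectory at time t_query."""
    return np.array([np.interp(t_query, traj_t, traj_p[:, k]) for k in range(3)])

# Hypothetical trajectory: moving along x at 1 m/s, sampled every 0.1 s.
traj_t = np.array([0.0, 0.1, 0.2])                          # timestamps (s)
traj_p = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0]])  # positions (m)

t_img = 0.10   # camera frame timestamp (start of exposure)
dt = 0.015     # hypothetical unknown time offset to mid-exposure (s)

p_naive = interp_position(traj_t, traj_p, t_img)            # ignores the offset
p_corrected = interp_position(traj_t, traj_p, t_img + dt)   # offset-compensated
```

The 1.5 cm gap between `p_naive` and `p_corrected` in this toy setup shows how even a small time offset translates into a pose error that grows with platform speed, hence the need to estimate dt online.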

Neo-cyber-hubb commented 1 year ago

Oh~ I get it. Thanks for your reply!