Hi,
As far as I understand your question, you want to estimate the pose (position and orientation) of your platform. If so, I do not recommend using the IMU data alone (i.e., dead reckoning), as the estimate drifts quickly. Usually, people use visual-inertial odometry (VIO) to estimate the pose. Hope it helps!
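To make the drift point concrete, here is a minimal, hypothetical sketch (not part of kalibr, and not something to rely on for real pose estimation) of naive strapdown integration with NumPy. Any bias or noise in the gyro/accelerometer samples gets integrated once into velocity and twice into position, so the error grows roughly quadratically with time:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(phi):
    """Rodrigues' formula: rotation matrix for a rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def dead_reckon(gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Naive strapdown integration of raw IMU data into 4x4 poses.

    gyro  : (N, 3) angular rate in the IMU body frame [rad/s]
    accel : (N, 3) specific force in the IMU body frame [m/s^2]
    dt    : sample period [s]
    Returns a list of 4x4 world-from-IMU poses T_w_imu, assuming the
    device starts at rest at the origin with identity orientation.
    Bias and noise are integrated once into velocity and twice into
    position, which is why IMU-only pose estimation drifts quickly.
    """
    R = np.eye(3)      # world-from-IMU rotation
    v = np.zeros(3)    # velocity in the world frame
    p = np.zeros(3)    # position in the world frame
    poses = []
    for w, a in zip(gyro, accel):
        R = R @ expm_so3(np.asarray(w) * dt)   # propagate orientation
        a_world = R @ np.asarray(a) + g        # remove gravity in the world frame
        v = v + a_world * dt                   # acceleration -> velocity
        p = p + v * dt                         # velocity -> position
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p
        poses.append(T)
    return poses
```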
Regards, Mohammad
@mhyoosefian
Hi, thank you very much for the comments. I studied a bit after asking this question and noticed that both ARCore and ARKit use VIO/VI-SLAM to get the camera pose, not just the IMU. This fact and your answer convinced me that I should not use IMU data alone to estimate the pose.
Thank you! I am closing the issue.
I found what I had been looking for. Thanks a lot!
You should be able to run kalibr_calibrate_imu_camera with the --export-poses option to export the poses used during optimization. Hope this helps.
https://github.com/ethz-asl/kalibr/blob/master/aslam_offline_calibration/kalibr/python/kalibr_calibrate_imu_camera#L239-L244
How can I convert raw IMU data to a camera pose matrix?
Using kalibr, I have successfully calibrated the camera intrinsics and the imu_to_camera transformation matrix by following Multiple camera calibration and Camera IMU calibration.
Now I wonder how I can convert raw IMU data (gyro + accelerometer, 6D) into a camera pose matrix (a 4x4 matrix holding the rotation matrix and translation vector). Is this also possible with some method in kalibr, or do I have to use another library?
I assume there is a way to convert the raw IMU data into a 4x4 IMU pose matrix, and that I can then apply the imu_to_camera transformation matrix to get the camera pose matrix.
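Your assumption about the second step is right: once you have a 4x4 world-from-IMU pose (from a VIO system, or from the poses kalibr exports), converting it to a camera pose is a single matrix product with the calibrated extrinsic. Below is a hedged sketch, assuming the camchain-imucam YAML layout kalibr writes (a T_cam_imu entry per camera, taking points from the IMU frame into the camera frame); the file name is a placeholder, so check the path and field names against your own result file:

```python
import numpy as np
import yaml

# Placeholder file name -- substitute the camchain-imucam YAML that kalibr
# produced for your dataset. Each camera entry is assumed to hold T_cam_imu,
# the 4x4 transform taking points from the IMU frame into the camera frame.
with open("camchain-imucam-results.yaml", "r") as f:
    camchain = yaml.safe_load(f)

T_cam_imu = np.array(camchain["cam0"]["T_cam_imu"])  # 4x4 extrinsic
T_imu_cam = np.linalg.inv(T_cam_imu)                 # camera pose in the IMU frame

def camera_pose_from_imu_pose(T_w_imu):
    """Given a 4x4 world-from-IMU pose, return the 4x4 world-from-camera pose."""
    return T_w_imu @ T_imu_cam
```

The first step (getting a reliable world-from-IMU pose from raw gyro/accelerometer data) is the hard part: as noted above, integrating the IMU alone drifts, which is why a VIO/VI-SLAM pipeline is the usual way to obtain those poses.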
I collected the data with an Android phone using marslogger.
I am new to this field and this could be a pretty dumb question, but I really need your help. Thank you!