ymei opened this issue 2 years ago
The matrix that will transform a point in the IMU frame to be in the camera frame would be written T_c1_b in ORB_SLAM3. Are you sure you're using the correct matrix? Have you tried using its inverse?
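A quick way to sanity-check which direction a given 4x4 extrinsic maps points is to push a test point through it and through its inverse. This is just a minimal numpy sketch with made-up numbers, not code from ORB_SLAM3; it assumes T_c1_b maps IMU/body-frame points into the camera frame:

```python
import numpy as np

def invert_se3(T):
    """Invert a 4x4 rigid-body transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

# Example extrinsic: a 90-degree rotation about z plus a small lever arm (made-up numbers).
T_c1_b = np.array([[0., -1., 0., 0.02],
                   [1.,  0., 0., 0.00],
                   [0.,  0., 1., 0.05],
                   [0.,  0., 0., 1.00]])

p_body = np.array([1.0, 0.0, 0.0, 1.0])   # point expressed in the IMU/body frame
p_cam  = T_c1_b @ p_body                  # the same point expressed in the camera frame
p_back = invert_se3(T_c1_b) @ p_cam       # back to the IMU/body frame
assert np.allclose(p_back, p_body)
```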
Yes, we've tried using its inverse; it still diverges wildly.
Hi @ymei, I am having the same issue. I am using stereo-inertial SLAM; I did the cam-IMU calibration with Kalibr and put the resulting matrix into the yaml file. Stereo works well, but stereo-inertial doesn't.
Could you please tell me more about how you changed the camera and IMU orientation? I am using an external IMU with a stereo camera. I didn't change the orientation and just ran Kalibr. I guess this is why stereo-inertial is not working in my case.
Looking forward to hearing from you. Thanks
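In case it helps with moving the Kalibr result into the yaml: if I read the conventions correctly, Kalibr's camchain-imucam.yaml reports cam0/T_cam_imu (points from the IMU frame into the camera frame), while the ORB_SLAM3 stereo-inertial yaml expects the opposite direction (the Tbc / IMU.T_b_c1 entry, camera frame into IMU/body frame), so the Kalibr matrix has to be inverted before pasting it in. A hedged sketch with placeholder values:

```python
import numpy as np

# Placeholder values; replace with the 4x4 cam0/T_cam_imu from your camchain-imucam.yaml.
T_cam_imu = np.array([[0., -1.,  0.,  0.05],
                      [0.,  0., -1.,  0.01],
                      [1.,  0.,  0., -0.02],
                      [0.,  0.,  0.,  1.00]])

# Invert the rigid-body transform to get the IMU-from-camera extrinsic.
R, t = T_cam_imu[:3, :3], T_cam_imu[:3, 3]
T_imu_cam = np.eye(4)
T_imu_cam[:3, :3] = R.T
T_imu_cam[:3, 3] = -R.T @ t

np.set_printoptions(suppress=True, precision=8)
print(T_imu_cam)  # row-major values to paste into the Tbc / IMU.T_b_c1 entry
```

Please double-check both conventions against your Kalibr and ORB_SLAM3 versions before trusting the sign of the lever arm.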
We encountered an interesting phenomenon that depends on the relative orientation between the stereo camera and the IMU. We have two ZED2i cameras, labeled A and B here, fixed on a rigid chassis. The stereo camera of A is used for vision, and the IMU of B is used for the inertial measurements. In the default ZED2i coordinate system in ROS, the IMU frame is two 90-degree rotations away from the camera frame. Kalibr recovers this well from the calibration data, and the recovered distance in the x-direction is very close to reality.

When this matrix T_b_c1 is used in Stereo-Inertial SLAM, tracking fails. The failure is particularly pronounced when the vehicle makes a turn, where the SLAM-reconstructed track diverges drastically from the ground truth. (Stereo alone, without the IMU, reproduces the track reasonably well.)

However, if we add code to manually rotate the IMU data so that the IMU frame is oriented the same way as the camera frame, Stereo-Inertial SLAM works well, with better accuracy than stereo vision alone. In this case, the Kalibr-generated matrix becomes:

The two cases used the same recorded dataset. The question is: why does manually aligning the coordinate systems of the camera and the IMU yield a good result, while having the needed rotation encoded in T_b_c1 gives a bad one?
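For reference, the manual rotation we added amounts to roughly the following (a sketch with placeholder values, not our actual code; R_cam_imu here stands for the rotation block of Kalibr's T_cam_imu, assumed to map IMU-frame vectors into camera-frame vectors):

```python
import numpy as np

# Rotate every raw IMU sample into the camera's axis convention before feeding
# it to the SLAM system; the extrinsic rotation then becomes (close to) identity.
# Placeholder rotation, standing in for the rotation block of Kalibr's T_cam_imu.
R_cam_imu = np.array([[0., -1.,  0.],
                      [0.,  0., -1.],
                      [1.,  0.,  0.]])

def rotate_imu_sample(gyro_imu, accel_imu):
    """Re-express angular velocity and specific force in the camera axes."""
    gyro_cam = R_cam_imu @ gyro_imu
    accel_cam = R_cam_imu @ accel_imu
    return gyro_cam, accel_cam

# Example: one sample measured in the IMU frame.
gyro_cam, accel_cam = rotate_imu_sample(np.array([0.01, 0.0, 0.2]),
                                        np.array([0.0, -9.81, 0.0]))
```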