rpng / open_vins

An open source platform for visual-inertial navigation research.
https://docs.openvins.com
GNU General Public License v3.0

Question regarding the OpenVINS coordinate system #277

Closed jianxiapyh closed 1 year ago

jianxiapyh commented 1 year ago

Hi

This issue is an extension of #61 and #49. Based on the suggestions provided in those two issues, we perform a coordinate transformation from OpenVINS' global reference frame (x forward, y left, z up) to OpenGL's coordinate system (x right, y up, z backward) by applying the following:

```
gl_pose.x = -ov_pose.y;
gl_pose.y = ov_pose.z;
gl_pose.z = -ov_pose.x;
gl_orientation.w = ov_orientation.w;
gl_orientation.x = -ov_orientation.y;
gl_orientation.y = ov_orientation.z;
gl_orientation.z = -ov_orientation.x;
```
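
For reference, the same mapping can be written as a single change-of-basis rotation. This is a rough sketch with illustrative names (not OpenVINS code), using Eigen:

```cpp
#include <Eigen/Dense>

// Change of basis from OpenVINS world axes (x fwd, y left, z up)
// to OpenGL world axes (x right, y up, z back).
void ov_to_gl(const Eigen::Vector3d &p_ov, const Eigen::Quaterniond &q_ov,
              Eigen::Vector3d &p_gl, Eigen::Quaterniond &q_gl) {
  Eigen::Matrix3d C;
  C << 0, -1, 0,   // x_gl = -y_ov
       0,  0, 1,   // y_gl =  z_ov
      -1,  0, 0;   // z_gl = -x_ov   (det(C) = +1, so C is a proper rotation)
  p_gl = C * p_ov;
  // Conjugating by a proper rotation keeps the quaternion's scalar part and
  // maps its vector part with the same C, which matches the re-ordering above.
  q_gl = Eigen::Quaterniond(C * q_ov.toRotationMatrix() * C.transpose());
}
```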

This transformation has worked for us for both the EuRoC MAV dataset (IMU uses x up, y right, z forward) and the ZED Mini camera (IMU uses x forward, y left, z up). Because our transformation worked for two IMUs with two different coordinate systems, we were under the impression that OpenVINS would always publish the poses in the global frame of reference, so we would only ever need the transformation from OpenVINS' global coordinate system to OpenGL's coordinate system.

Recently, we have been adding support for the Intel RealSense (D435/D455) to our platform, and the Intel RealSense uses a different coordinate system for its IMU (x right, y down, z forward). However, our previous transformation does not work this time. Specifically, camera movement and orientation are not reflected correctly.

So we wonder if we have misunderstood the OpenVINS system, especially regarding the coordinate system of the IMU's sensor frame and OpenVINS' global reference frame. Suppose two different IMU sensors are mounted differently. Does OpenVINS publish poses in two separate frames? If so, do we need a separate transformation from OpenVINS to OpenGL for each sensor? And if that is the case, I am confused about why our transformation worked for the two earlier cases (EuRoC and ZED Mini).

goldbattle commented 1 year ago

To convert from right hand to left hand, I believe just the xyz components need to be changed (might be wrong here).

As for the difference once you change to a new sensor, this is expected, as each sensor has a different IMU location. The published pose is the pose of the IMU sensor in the global frame, so you will need to transform that pose into whatever frame you desire (for example a camera or some other rigid frame).
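
Roughly, something like the following composes the published IMU pose with known camera-to-IMU extrinsics (the names R_ItoG, p_IinG, R_CtoI, p_CinI are illustrative, not an OpenVINS API):

```cpp
#include <Eigen/Dense>

// Minimal sketch: given the IMU pose in the global frame (R_ItoG, p_IinG) and
// camera-to-IMU extrinsics (R_CtoI, p_CinI), compute the camera pose in G.
void imu_to_camera_pose(const Eigen::Matrix3d &R_ItoG, const Eigen::Vector3d &p_IinG,
                        const Eigen::Matrix3d &R_CtoI, const Eigen::Vector3d &p_CinI,
                        Eigen::Matrix3d &R_CtoG, Eigen::Vector3d &p_CinG) {
  R_CtoG = R_ItoG * R_CtoI;           // camera orientation in the global frame
  p_CinG = p_IinG + R_ItoG * p_CinI;  // camera position in the global frame
}
```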

jianxiapyh commented 1 year ago

Hi,

Thanks for getting back to me, and sorry for the late reply. Based on your explanation, we can see why we need separate transformations for the ZED Mini and the Intel RealSense. However, what confuses us is that the ZED Mini and the EuRoC MAV dataset also have different IMU frames, yet the same transformation previously worked for both.

In addition, we also believe that changing the XYZ components is sufficient. The transformation we applied (Intel RealSense to OpenGL) consists of the following:

```
gl_pose.x = realsense_pose.x;
gl_pose.y = realsense_pose.z;
gl_pose.z = -realsense_pose.x;
gl_orientation.w = realsense_orientation.w;
gl_orientation.x = realsense_orientation.x;
gl_orientation.y = realsense_orientation.z;
gl_orientation.z = -realsense_orientation.y;
```

We noticed that the evo trajectory plot below shows small variations between the trajectories. Do you have any insight into what's causing the discrepancy?

[image: evo trajectory comparison plot]

This is a trajectory plot before (pose_ori) and after (pose_timewarp) our RealSense transformation. The transformation seems to work at the beginning (pink rectangles), as the pitch and the yaw indicate they are the reverse of each other (both have the same amount of rotation). However, if you look at the red rectangles, you see that while the yaw goes up more than 80 degrees for pose_ori in the first block, the corresponding transformed trajectory pose_timewarp only goes up around 30 degrees. A similar trend shows in block two: the yaw for pose_ori goes down 60 degrees, whereas pose_timewarp only goes down about 20 degrees. Do you have any insight into why this is happening?

goldbattle commented 1 year ago

This is likely due to your orientation re-ordering. Generally, you should just multiply xyz by -1 if you need to flip the rotation (e.g. flip rotation from IMU to G into one that rotates G to IMU). I don't think it is proper to just re-order the xy.
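
In other words, a change of axis conventions should be applied as a conjugation of the rotation (and a rotation of the position), not an ad-hoc component swap. A rough sketch, where the exact mapping C for the RealSense IMU is an assumption to verify:

```cpp
#include <Eigen/Dense>

// Re-express an orientation under a change of axis conventions by conjugating
// with the change-of-basis rotation C. This is only valid when det(C) = +1,
// in which case it equals keeping the quaternion's scalar part and rotating
// its vector part by C; an arbitrary re-ordering of components is not.
Eigen::Quaterniond remap_orientation(const Eigen::Quaterniond &q_in,
                                     const Eigen::Matrix3d &C) {
  return Eigen::Quaterniond(C * q_in.toRotationMatrix() * C.transpose());
}

// Candidate (assumed) mapping from RealSense IMU axes (x right, y down, z fwd)
// to OpenGL axes (x right, y up, z back):
//   Eigen::Matrix3d C = Eigen::Vector3d(1, -1, -1).asDiagonal();
// Positions would then map as p_gl = C * p_rs.
```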

jianxiapyh commented 1 year ago

Are you referring to the direction of the transformation? If so, that is not what we are looking for. We are actually looking to change the coordinate system of the RealSense IMU to that of OpenGL.

goldbattle commented 1 year ago

Were you able to address this problem? I am not sure what else I can provide, but the IMU should be a right-handed coordinate system, so you likely need a transformation to map it into your OpenGL system.

jianxiapyh commented 1 year ago

Thanks for the info. Sorry for the late reply. We have been working on other deadlines recently and will look into the transformation soon and let you know if we have any updates.