MIT-TESSE / tesse-interface

Python interface to a Unity project with TESSE
GNU General Public License v2.0

Camera extrinsic and relationship to relative pose #5

Closed JD-ETH closed 4 years ago

JD-ETH commented 4 years ago

I'm having trouble properly correlating different frames of observations into a common frame. Can you point out whether any of my assumptions are wrong:

  1. The pose from the simulation contains zero noise, as does the pinhole projection of the camera. The returned pose follows the convention (x, z, yaw(rad)) and can be converted to a 6-DOF pose.
  2. According to the extrinsics from getCameraInformation, the depth/rgb_left camera is located at (-0.05, 0, 0) with identity rotation relative to the robot frame.
  3. To recover the scene in the global frame, reproject the depth image, first converting to the robot frame via the extrinsics, then to the common frame via the robot pose.
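For reference, the three steps above can be sketched roughly as follows. This is a minimal illustration, not the TESSE API: the intrinsics (fx, fy, cx, cy), the helper names, and the choice of y as the vertical/yaw axis are all assumptions to make the chaining of transforms explicit.

```python
import numpy as np

def pose_to_se3(x, z, yaw):
    """Lift a planar (x, z, yaw) pose to a 4x4 homogeneous transform.
    Assumes yaw rotates about the vertical axis (y here) -- an assumption,
    not a documented TESSE convention."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])
    T[0, 3] = x
    T[2, 3] = z
    return T

def depth_to_global(depth, fx, fy, cx, cy, T_robot_cam, T_world_robot):
    """Back-project a depth image with a pinhole model, then chain
    camera->robot extrinsics and robot->world pose (step 3 above)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x_cam = (u - cx) * depth / fx
    y_cam = (v - cy) * depth / fy
    pts_cam = np.stack([x_cam, y_cam, depth, np.ones_like(depth)], axis=-1)
    pts = pts_cam.reshape(-1, 4).T              # 4 x N homogeneous points
    return (T_world_robot @ T_robot_cam @ pts)[:3].T   # N x 3 world points
```

With the extrinsics from assumption 2, `T_robot_cam` would be the identity with a translation of (-0.05, 0, 0); any error in the frame convention of that transform shows up exactly as the rotation-dependent misalignment described below.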

However, my reprojected point clouds are always misaligned whenever rotation is present. If I relocate the camera to (0, 0, 0) instead, the point clouds align perfectly in the common global frame. So I would like to ask again whether my assumption about the extrinsics is correct: are they defined in a robot frame that shares the same coordinate-system convention as the image frame, i.e. East-Down-North = x-y-z?

Thanks a lot!

griffith826 commented 4 years ago

My suspicion is that this has to do with the coordinate systems. Unity actually uses a left-hand coordinate system under the hood, while nearly all robotics applications are right-handed. We have a brief note here explaining the coordinates. If you are querying the simulation via tesse-interface, then you'll be getting back information in the left-handed coordinate system. For the competition, I believe we are providing pose information using a right-handed system as that is a more typical convention for robotics.
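To make the handedness point concrete: one common way to map a left-handed Unity position and yaw into a right-handed frame is to negate a single axis and the rotation sense. The axis choice below is an assumption for illustration; the actual convention TESSE uses is described in the note linked above.

```python
import numpy as np

def unity_to_right_handed(position, yaw):
    """Convert a left-handed Unity (x, y, z) position and yaw (rad) to a
    right-handed frame by negating the x axis and the rotation angle.
    Flipping any single axis changes handedness; which axis to flip is an
    assumption here -- check it against the TESSE coordinate note."""
    x, y, z = position
    return np.array([-x, y, z]), -yaw
```

If poses are consumed in the left-handed system but the reprojection math assumes a right-handed one, rotations come out with the wrong sense, which matches the symptom of misalignment appearing only when rotation is present.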

JD-ETH commented 4 years ago

Consider it closed, as this seems to be more of a Unity issue.