code-iai / iai_kinect2

Tools for using the Kinect One (Kinect v2) in ROS
Apache License 2.0

Transforming iai_kinect2 coordinate system to ROS coordinate system #565

Closed · prasuchit closed this issue 3 years ago

prasuchit commented 3 years ago

Hi,

I'm using the Kinect v2 with ROS and I need to publish the 3D coordinates of pixels wrt the world frame. I have a static transform publisher set up in a launch file that gives the position and orientation of the camera wrt the ROS world origin. Now, if I visualize the Kinect point cloud in RViz, I see it this way:

[Screenshot: Kinect point cloud as displayed in RViz]
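For context, the static transform publisher I mentioned is roughly equivalent to the Python sketch below (my actual setup is a launch file; the frame names and pose values here are placeholders, not my real calibration):

```python
#!/usr/bin/env python
# Rough equivalent of the static transform I publish from my launch file.
# Frame names and numbers are placeholders, not my actual calibration.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == "__main__":
    rospy.init_node("camera_static_tf")

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "world"          # ROS world/root frame (placeholder)
    t.child_frame_id = "kinect2_link"    # camera frame (placeholder)
    t.transform.translation.x = 1.0      # placeholder camera position
    t.transform.translation.y = 0.0
    t.transform.translation.z = 1.5
    t.transform.rotation.x = 0.0         # placeholder orientation (identity)
    t.transform.rotation.y = 0.0
    t.transform.rotation.z = 0.0
    t.transform.rotation.w = 1.0

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform(t)
    rospy.spin()
```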

If I'm not misunderstanding this, the following is the coordinate system of the Kinect camera, and that is how the point cloud is being displayed:

[Image: Kinect camera coordinate system]

But as you well know, ROS uses the following coordinate conventions:

https://www.ros.org/reps/rep-0105/ECEF_ENU_Longitude_Latitude_relationships.svg

And now I'm confused as to how I can use the image data and the static transform publisher from kinect2_link to root to correctly publish the 3D world coordinates of the pixels on the tf topic. Please help.

prasuchit commented 3 years ago

For people who might face a similar issue in the future. Here's the mistake I was making.

If you read through this answer, under "1. While capturing an object with the Kinect", it discusses camera coordinates, world coordinates, and extrinsic calibration. It also explains that it is conventional to assume the camera frame is the origin (0, 0, 0) of the world and to calculate the object coordinates wrt that.
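In equations (my notation, summarizing that answer): with extrinsics $R$, $t$ mapping world coordinates into camera coordinates,

$$
X_{cam} = R\,X_{world} + t
$$

and under the convention that the camera frame *is* the world origin, $R = I$ and $t = 0$, so a 3D point computed from a pixel and its depth is already in "world" coordinates in that sense.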

That is where my understanding was mistaken. I was using the publish_tf argument of kinect2_bridge and also publishing a static transform from root to kinect2_rgb_optical_frame. The point cloud is published wrt the camera coordinate system, which is why I was seeing the points in the orientation shown above. The static transform publisher assumed the ROS coordinate system, so the two conflicted with each other and I ended up with a huge mess. So you actually don't need kinect2_bridge to publish the camera tf unless you want to perform some transformations between the RGB and depth images, for instance.

What I did instead was publish the static transform between root and kinect2_rgb_optical_frame from the launch file. I compute an RGB point's 3D location wrt the camera frame from the corresponding depth image by back-projecting through the camera model, store it as a PoseStamped message (tf2_geometry_msgs) with the camera's optical frame as the header frame_id, and then use the tf2 library's lookup_transform and do_transform_pose to transform the point into the root frame and publish it on a separate topic. A rough sketch of this pipeline follows.
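Sketch of the pipeline (not my exact code). The topic names, the root frame name, and the pixel picked are assumptions/placeholders; the intrinsics come from the camera_info topic, and the back-projection is the standard pinhole relation x = (u - cx)·d/fx, y = (v - cy)·d/fy, z = d:

```python
#!/usr/bin/env python
# Sketch: pixel (u, v) + depth -> 3D point in the camera optical frame,
# then transform into the root frame with tf2 and republish.
# Topic/frame names and the chosen pixel are placeholders for illustration.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers do_transform_pose for PoseStamped
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import CameraInfo, Image
from cv_bridge import CvBridge

CAM_FRAME = "kinect2_rgb_optical_frame"
ROOT_FRAME = "root"                      # placeholder name of the world/root frame

rospy.init_node("pixel_to_world")
bridge = CvBridge()
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
pub = rospy.Publisher("/pixel_pose_world", PoseStamped, queue_size=1)

# Intrinsics of the registered depth/RGB image (qhd topic names assumed).
info = rospy.wait_for_message("/kinect2/qhd/camera_info", CameraInfo)
fx, fy, cx, cy = info.K[0], info.K[4], info.K[2], info.K[5]

def depth_cb(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    u, v = 320, 240                      # placeholder pixel of interest
    d = float(depth[v, u]) / 1000.0      # depth image is in millimetres
    if d == 0.0:
        return                           # no depth measurement at this pixel

    # Back-project through the pinhole model into the camera optical frame.
    pose_cam = PoseStamped()
    pose_cam.header.frame_id = CAM_FRAME
    pose_cam.header.stamp = msg.header.stamp
    pose_cam.pose.position.x = (u - cx) * d / fx
    pose_cam.pose.position.y = (v - cy) * d / fy
    pose_cam.pose.position.z = d
    pose_cam.pose.orientation.w = 1.0

    try:
        # Static transform published from the launch file: root -> camera.
        tf_root_cam = tf_buffer.lookup_transform(
            ROOT_FRAME, CAM_FRAME, rospy.Time(0), rospy.Duration(1.0))
    except (tf2_ros.LookupException, tf2_ros.ExtrapolationException):
        return
    pose_root = tf2_geometry_msgs.do_transform_pose(pose_cam, tf_root_cam)
    pub.publish(pose_root)

rospy.Subscriber("/kinect2/qhd/image_depth_rect", Image, depth_cb, queue_size=1)
rospy.spin()
```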

More useful links: link1, link2, link3, link4

Side note: within kinect2_bridge.cpp there is a method called publishStaticTF that can be used to rotate the point cloud if necessary, although that wasn't needed here.
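If you ever do need the whole cloud expressed in another frame without touching the bridge code, one alternative (not the repo's method) is to re-express the PointCloud2 with tf2_sensor_msgs; a sketch, using the same placeholder frame/topic names as above:

```python
#!/usr/bin/env python
# Sketch: re-express the Kinect point cloud in the root frame instead of
# the camera optical frame. Frame/topic names are placeholders.
import rospy
import tf2_ros
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

rospy.init_node("cloud_to_root")
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
pub = rospy.Publisher("/kinect2/points_root", PointCloud2, queue_size=1)

def cloud_cb(cloud):
    try:
        tf_root_cam = tf_buffer.lookup_transform(
            "root", cloud.header.frame_id, rospy.Time(0), rospy.Duration(1.0))
    except (tf2_ros.LookupException, tf2_ros.ExtrapolationException):
        return
    pub.publish(do_transform_cloud(cloud, tf_root_cam))

rospy.Subscriber("/kinect2/qhd/points", PointCloud2, cloud_cb, queue_size=1)
rospy.spin()
```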