microsoft / RoomAliveToolkit


Calibration data understanding. #13

Closed skrealworld closed 9 years ago

skrealworld commented 9 years ago

Hello there, thanks for such a helpful toolkit. I ran the calibration and now have the calibration file, and I'm trying to make sure I understand the data in it. First there is the camera pose matrix (4x4), which relates the camera to the world coordinate system. Then come the color camera matrix (3x3, intrinsics) and the depth camera matrix (3x3, intrinsics), followed by the same data for the projector (I am using only one camera/projector pair). At the end there is the projector pose matrix (4x4), which gives the projector's coordinate system. I want to map an image from the camera view into the projector view. Is "transform the point from camera coordinates to projector coordinates, then use the projector intrinsics to map the 3D point into the 2D projector image" the right way to implement it?
I apologize if this is an inappropriate question. Thanks and best regards.

grandchild commented 9 years ago

Yes, that's exactly how it works. The matrices are easy to identify, too: the camera pose is the identity, the projector pose matrix is a full rigid transform, and the projector intrinsics matrix is upper triangular with the (1,2) entry (the skew term) at 0.
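As a sketch of what those matrices look like once loaded (the numeric values below are illustrative placeholders, not from a real calibration file, and the variable names are my own):

```python
import numpy as np

# Illustrative values only -- a real RoomAlive calibration file supplies these.
camera_pose = np.eye(4)  # the first depth camera defines the world frame

projector_pose = np.array([  # 4x4 rigid transform, generally non-identity
    [0.99, -0.01,  0.02, 0.10],
    [0.01,  0.99,  0.03, 0.05],
    [-0.02, -0.03, 0.99, 0.20],
    [0.00,  0.00,  0.00, 1.00],
])

projector_intrinsics = np.array([  # upper triangular, zero skew at (0, 1)
    [1400.0,    0.0, 640.0],
    [   0.0, 1400.0, 400.0],
    [   0.0,    0.0,   1.0],
])

# The structural checks described above:
assert np.allclose(camera_pose, np.eye(4))       # camera pose is the identity
assert projector_intrinsics[0, 1] == 0.0         # no skew term
assert np.allclose(np.tril(projector_intrinsics, -1), 0.0)  # upper triangular
```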


skrealworld commented 9 years ago

Thanks, grandchild, for the quick reply. Suppose I want to map an image from the Kinect camera view to the projector view; how do I map the camera's coordinate system into the projector's coordinate system? One more quick question: is the projector pose matrix in the calibration file the projector's extrinsic matrix with respect to the camera (since the camera pose is the identity)?

thundercarrot commented 9 years ago

All projector poses are relative to the first depth camera.

Regarding mapping, can you be very specific about what you are trying to do? Do you want to map from the depth camera or the color camera? Do you have a 2D point (image coordinates) or a 3D point (in the depth or color camera coordinate frame)?

skrealworld commented 9 years ago

I am trying to map a 3D point from the Kinect's view to the corresponding 3D point in the projector's view.

thundercarrot commented 9 years ago

I assume you mean a 3D point in the depth camera coordinate frame? This is a simple coordinate transform: the 3D point x' in projector coordinates is A * x, where A is the projector's 4x4 pose matrix and x is the (homogeneous) point in the depth camera frame. x' can then be projected to projector image coordinates using the intrinsics (look for the 'Project' function in the code).
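The two steps above (rigid transform, then pinhole projection with a perspective divide) can be sketched as follows. This is a minimal illustration of the math, not the toolkit's own `Project` function; the function name and the assumption that the pose maps depth-camera coordinates into projector coordinates follow the description in this thread:

```python
import numpy as np

def project_to_projector(x_depth, projector_pose, projector_intrinsics):
    """Map a 3D point in depth-camera coordinates to projector pixel coords.

    x' = A * x gives the point in projector coordinates (A is the 4x4 pose);
    the 3x3 intrinsics and a perspective divide then give image coordinates.
    """
    x = np.append(x_depth, 1.0)              # homogeneous 4-vector
    x_proj = projector_pose @ x              # 3D point in projector frame
    uvw = projector_intrinsics @ x_proj[:3]  # pinhole projection
    return uvw[:2] / uvw[2]                  # perspective divide -> pixels
```

For example, with an identity pose and principal point (640, 400), a point on the optical axis such as (0, 0, 2) lands exactly at the principal point.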

skrealworld commented 9 years ago

Super awesome, thanks a ton, Andy. That's exactly what I wanted to confirm. Thanks a lot again.