tostercx opened this issue 3 years ago
@tostercx It seems like you may have overlooked that the depth camera is tilted 6 degrees downwards relative to the color camera, as shown below. You can apply a matrix transform to map the depth image to the color image.
@UEBoy2019 I'm pretty sure the extrinsics matrix takes care of that? I've constructed it from the AK's calibration.json that's attached to recording MKVs:
"Rt": {
"Rotation": [0.999981701374054, 0.00579096982255578, -0.0017357708420604467, -0.0055804005824029446, 0.99462145566940308, 0.10342639684677124, 0.0023253741674125195, -0.10341481864452362, 0.99463558197021484],
"Translation": [-0.032038252800703049, -0.0018160234903916717, 0.0041334992274641991]
},
Rotation converted to euler/degrees:
[ x: 5.9358613, y: 0.1332342, z: 0.3197359 ]
So about 6 degrees. And the translation is 32 mm which is the physical offset between the cameras - seems to check out. The point clouds align perfectly too - it's just the scale that is off for some reason :/
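For reference, a minimal sketch of that conversion, assuming NumPy and SciPy are available. The nine rotation values are taken from the Rt block above and read in row-major order; the signs of the resulting angles depend on the Euler convention and on which direction of the transform you decompose, but the magnitudes match the numbers quoted above.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Nine rotation entries from calibration.json's Rt block, interpreted row-major.
rotation = np.array([
    0.999981701374054, 0.00579096982255578, -0.0017357708420604467,
    -0.0055804005824029446, 0.99462145566940308, 0.10342639684677124,
    0.0023253741674125195, -0.10341481864452362, 0.99463558197021484,
]).reshape(3, 3)

# Decompose into Euler angles: roughly [-5.94, -0.13, -0.32] degrees with this
# convention, i.e. the ~6 degree tilt is around the x axis.
print(Rotation.from_matrix(rotation).as_euler('xyz', degrees=True))
```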
I've been experimenting with downscaling the color image into the depth camera's space (k4a_transformation_color_image_to_depth_camera) instead of the default used in all the examples, which goes the other way around (k4a_transformation_depth_image_to_color_camera), to see if I can get any performance gains from working with less data. The problem is that after calling create_from_rgbd_image with the depth camera's intrinsics and extrinsics, I get a point cloud that is ~8-9% smaller in scale compared to working with the default color camera's image size / intrinsics. I probably missed a step somewhere, as these should line up perfectly in theory?

What I do (a sketch of these two steps is at the end of this post):
k4a_transformation_color_image_to_depth_camera
pcd.create_from_rgbd_image with rgbd, intrinsics and extrinsics

Not the best capture but it should illustrate the problem. Generated from the same frame of data, light is in the color camera's space, dark in the depth camera's:
Original color space intrinsic.json:
Modified depth space intrinsic.json:
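A minimal sketch of the two steps above on the Open3D side, under the assumption that the color image has already been transformed into the depth camera's geometry via k4a_transformation_color_image_to_depth_camera (through whichever wrapper is in use). The function name pointcloud_in_depth_space and the depth_intrinsics dict with fx/fy/cx/cy keys are placeholders for the values from the depth block of the intrinsic.json files above, and the millimetre depth scale is an assumption about the raw depth map.

```python
import numpy as np
import open3d as o3d

def pointcloud_in_depth_space(color_in_depth, depth_mm, depth_intrinsics, extrinsic=np.eye(4)):
    """Build a point cloud from images already registered to the depth camera
    (color via k4a_transformation_color_image_to_depth_camera, depth as captured)."""
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(np.ascontiguousarray(color_in_depth)),
        o3d.geometry.Image(np.ascontiguousarray(depth_mm)),
        depth_scale=1000.0,             # assuming depth is in millimetres
        convert_rgb_to_intensity=False,
    )
    # Intrinsics must be the depth camera's, at the depth image's resolution.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        depth_intrinsics["width"], depth_intrinsics["height"],
        depth_intrinsics["fx"], depth_intrinsics["fy"],
        depth_intrinsics["cx"], depth_intrinsics["cy"],
    )
    # With both images already in the depth camera's frame, the extrinsic should
    # normally be identity; passing the depth<->color Rt instead only moves and
    # rotates the cloud, it should not change its scale.
    return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic, extrinsic)
```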