Until now I used a RealSense D435 for my project, and I always used rs2_deproject_pixel_to_point to transform a depth image into a point cloud. But recently I have to switch to the Azure Kinect. It looks like k4a_transformation_depth_image_to_point_cloud can do the same thing as rs2_deproject_pixel_to_point, but I cannot find a good example of it and I don't know how to use it. For example, I don't know what transformation_handle is. I also don't understand the camera parameter: it takes values like K4A_CALIBRATION_TYPE_COLOR, but my depth image format is K4A_IMAGE_FORMAT_DEPTH16, and I don't see how the two relate. It is difficult to understand from https://microsoft.github.io/Azure-Kinect-Sensor-SDK/master/group___functions_ga7385eb4beb9d8892e8a88cf4feb3be70.html alone. Please help me.
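For context, here is my best guess at how the call is meant to be used, pieced together from the k4a.h header and the docs: the transformation_handle seems to be created from the device calibration with k4a_transformation_create, and camera seems to select which camera's geometry the depth image is in (not an image format). I have not been able to verify this on hardware, and the depth mode and the minimal error handling below are my own assumptions:

```c
// Untested sketch, assuming the Azure Kinect Sensor SDK (k4a.h) and a
// connected device. Depth mode and error handling are my assumptions.
#include <k4a/k4a.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
    {
        printf("failed to open device\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    k4a_device_start_cameras(device, &config);

    // The calibration describes both cameras; transformation_handle is
    // created from it and, as I understand it, caches the lookup tables
    // used for the reprojection.
    k4a_calibration_t calibration;
    k4a_device_get_calibration(device, config.depth_mode,
                               config.color_resolution, &calibration);
    k4a_transformation_t transformation = k4a_transformation_create(&calibration);

    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
    {
        // The depth image from the capture has format K4A_IMAGE_FORMAT_DEPTH16.
        k4a_image_t depth_image = k4a_capture_get_depth_image(capture);
        int width = k4a_image_get_width_pixels(depth_image);
        int height = k4a_image_get_height_pixels(depth_image);

        // The output must be a CUSTOM-format image with 3 x int16_t
        // (X, Y, Z in millimeters) per pixel.
        k4a_image_t xyz_image = NULL;
        k4a_image_create(K4A_IMAGE_FORMAT_CUSTOM, width, height,
                         width * 3 * (int)sizeof(int16_t), &xyz_image);

        // camera is a k4a_calibration_type_t, not an image format:
        // K4A_CALIBRATION_TYPE_DEPTH for a raw depth frame, or
        // K4A_CALIBRATION_TYPE_COLOR if the depth image was first
        // transformed into the color camera's geometry.
        k4a_transformation_depth_image_to_point_cloud(
            transformation, depth_image, K4A_CALIBRATION_TYPE_DEPTH, xyz_image);

        int16_t *xyz = (int16_t *)k4a_image_get_buffer(xyz_image);
        // Point for pixel (u, v): xyz[3 * (v * width + u) + 0/1/2] = X/Y/Z in mm.
        printf("first point: %d %d %d (mm)\n", xyz[0], xyz[1], xyz[2]);

        k4a_image_release(xyz_image);
        k4a_image_release(depth_image);
        k4a_capture_release(capture);
    }

    k4a_transformation_destroy(transformation);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```

Is this the intended usage, and is my reading of the camera parameter correct?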