etiennedub / pyk4a

Python 3 wrapper for Azure-Kinect-Sensor-SDK
MIT License

Function "color_image_to_depth_camera" not working #177

Closed: Krissy93 closed this issue 2 years ago

Krissy93 commented 2 years ago

I have some issues with transforming the color image into the depth camera frame. I need a point cloud with color applied to it. This should be done by transforming the color image into the depth frame, so that there is exactly one color value for each point in the cloud. Then I'll be able to get a table like this: x y z R G B (one row per point).
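
Just to make the target format concrete, here is a tiny sketch of the table I am after (placeholder arrays only, not my actual code; points and colors are assumed to already be aligned, one color per cloud point):

import numpy as np

# placeholder arrays: N points with x/y/z coordinates and one R/G/B triplet per point
points = np.zeros((4, 3), dtype=np.int16)   # x y z
colors = np.zeros((4, 3), dtype=np.uint8)   # R G B

# one row per point: x y z R G B
table = np.concatenate((points, colors.astype(points.dtype)), axis=1)
print(table.shape)  # (4, 6)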

My actual code is as follows. I open an MKV, then extract the point cloud and the color frame (which needs converting because it is MJPEG). However, the script suddenly stops when it reaches the last line without returning anything: no error, no transformed image. My code continues after that line, but when I run the whole thing it simply exits there. What am I doing wrong?

For completeness: when I use capture.transformed_color I get an error saying that my capture.color image needs to be converted to RGBA (see the sketch after the code below for what I mean by that), but if I edit the source code to do so explicitly, it exits in the same way. Is the transformation function broken? Can anyone provide a working example? Thank you!

import pyk4a

# convert_to_bgra_if_required is copied from the pyk4a examples (the function is shown further down)

playback = pyk4a.PyK4APlayback(path_mkv)
playback.open()
duration = playback.length / 1000000  # playback.length is in microseconds
frames = int(duration) * 30           # the recording was made at 30 fps

for i in range(0, frames):
    try:
        capture = playback.get_next_capture()
        if capture.color is not None and capture.depth is not None:
            if capture.depth_point_cloud is not None:
                # get the point cloud as an (N, 3) array
                points = capture.depth_point_cloud.reshape((-1, 3))
                # convert the color image to BGRA if required
                color = convert_to_bgra_if_required(playback.configuration["color_format"], capture.color)
                depth = capture.depth
                # create the transformed color image -- the script exits here
                transformed = pyk4a.color_image_to_depth_camera(color, depth, playback.calibration, playback.thread_safe)
    except EOFError:
        break
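
For reference, this is the kind of conversion I mean when I say the image needs to be RGBA (a sketch only: the helper name to_bgra is just illustrative, and that the transformation wants a 4-channel BGRA image is my assumption):

import cv2
import numpy as np

def to_bgra(mjpg_buffer: np.ndarray) -> np.ndarray:
    # decode the raw MJPG buffer to a 3-channel BGR image, then append an alpha channel
    bgr = cv2.imdecode(mjpg_buffer, cv2.IMREAD_COLOR)  # (H, W, 3) uint8
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2BGRA)       # (H, W, 4) uint8
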
Krissy93 commented 2 years ago

Yes, I used a workaround to solve it. Basically, the issue is that the color image must first be converted to the right format before it can be read properly. I copied this function from the examples into my script:

import cv2
import pyk4a


def convert_to_bgra_if_required(color_format: pyk4a.ImageFormat, color_image):
    # handles the possible pyk4a.ImageFormat color formats
    if color_format == pyk4a.ImageFormat.COLOR_MJPG:
        color_image = cv2.imdecode(color_image, cv2.IMREAD_COLOR)
    elif color_format == pyk4a.ImageFormat.COLOR_NV12:
        color_image = cv2.cvtColor(color_image, cv2.COLOR_YUV2BGRA_NV12)
        # this also works, and it shows how the COLOR_NV12 format is stored in memory
        # h, w = color_image.shape[0:2]
        # h = h // 3 * 2
        # luminance = color_image[:h]
        # chroma = color_image[h:, :w//2]
        # color_image = cv2.cvtColorTwoPlane(luminance, chroma, cv2.COLOR_YUV2BGRA_NV12)
    elif color_format == pyk4a.ImageFormat.COLOR_YUY2:
        color_image = cv2.cvtColor(color_image, cv2.COLOR_YUV2BGRA_YUY2)
    return color_image
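
A quick note on why the extra conversion in my main function below is still needed (this is my understanding, not something from the docs): for an MJPG recording, cv2.imdecode returns a 3-channel BGR image, so an alpha channel has to be added before calling the transformation:

# inside the capture loop of main() below
color = convert_to_bgra_if_required(playback.configuration["color_format"], capture.color)
print(color.shape)  # e.g. (720, 1280, 3) for an MJPG recording -- still only 3 channels
color = cv2.cvtColor(color, cv2.COLOR_RGB2RGBA)
print(color.shape)  # e.g. (720, 1280, 4) -- the 4 channels the transformation accepts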

Then my main function is as follows:

import os

import numpy as np


def main(file_path, save_path):
    playback = pyk4a.PyK4APlayback(file_path)
    playback.open()
    duration = playback.length / 1000000  # playback.length is in microseconds
    frames = int(duration) * 30           # the recording was made at 30 fps

    for i in range(0, frames):
        try:
            capture = playback.get_next_capture()
            if capture.color is not None and capture.depth is not None:
                if capture.depth_point_cloud is not None:
                    points = capture.depth_point_cloud.reshape((-1, 3))
                    # get the correct color format (3-channel BGR for MJPG recordings)
                    color = convert_to_bgra_if_required(playback.configuration["color_format"], capture.color)
                    # add the alpha channel the transformation expects
                    color = cv2.cvtColor(color, cv2.COLOR_RGB2RGBA)
                    depth = capture.depth
                    # now apply the transformation and drop the alpha channel again
                    transformed = pyk4a.color_image_to_depth_camera(color, depth, playback.calibration, playback.thread_safe)[:, :, :3]
                    # the colors are now concatenated with the point cloud data points
                    colors = transformed.reshape((-1, 3))
                    data = np.concatenate((points, colors), axis=1)

                    # save one text file per frame, one row per point
                    path = os.path.join(save_path, 'cloud_' + str(i) + '.txt')
                    with open(path, 'w+') as file:
                        np.savetxt(file, data)
                    print('Saved cloud number: ', i)
                else:
                    print('No cloud, skipped number: ', i)
            else:
                print('No synchro, skipped number: ', i)

        except EOFError:
            print('Exiting, end of recording reached')
            break
    playback.close()
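
For completeness, a minimal way to call it (the paths here are just placeholders):

if __name__ == '__main__':
    main('recording.mkv', './clouds')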
cansik commented 2 years ago

@Krissy93 Thanks for the example. I guess it would make sense to create a pull request and add this to the library directly.