Closed hlyf-xs closed 2 years ago
I think it could just be a coincidence. In general, to access the value of pixel (x, y) in an image, use `image[y][x]` (or, equivalently, `image[y, x]`). If you print out `color.shape` (assuming `color` is a NumPy array), you should see that its shape is `(height, width, 4)`. So, to access the four B/G/R/A values at pixel (x, y) of the color image, use `color[y][x]` or `color[y, x]`.
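A minimal NumPy sketch of this row-first indexing, using a small fake BGRA image (the dimensions and pixel values below are made up for illustration):

```python
import numpy as np

# A fake BGRA "color" image: 4 rows (height) by 6 columns (width),
# 4 channels per pixel, matching the (height, width, 4) shape above.
height, width = 4, 6
color = np.zeros((height, width, 4), dtype=np.uint8)

# Mark pixel (x=5, y=2): column 5, row 2. Note the row index (y) comes first.
x, y = 5, 2
color[y, x] = (255, 0, 0, 255)  # B, G, R, A

print(color.shape)   # (4, 6, 4) -- height first, then width, then channels
print(color[y, x])   # the four B/G/R/A values at pixel (x, y)

# color[y][x] and color[y, x] select the same pixel.
assert (color[y][x] == color[y, x]).all()
```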
To get the depth value at that pixel, you must first transform the depth image into the frame of the color image. To do this, `pyk4a` conveniently provides `capture.transformed_depth`, which lets you read the raw depth value at color pixel (x, y) with the expression `capture.transformed_depth[y, x]`. However, you will need other transformations/functions to get the 3D X, Y, and Z coordinates of color pixel (x, y).
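As one illustration of that last step, here is a sketch of back-projecting a color pixel and its transformed depth into 3D using the standard pinhole camera model. The intrinsics (`fx`, `fy`, `cx`, `cy`) below are made-up placeholder values; in practice they come from the device's calibration, and the SDK's own transformation functions handle lens distortion that this sketch ignores:

```python
# Hypothetical color-camera intrinsics for illustration only; real values
# come from the device calibration, not these placeholders.
FX, FY = 600.0, 600.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point in pixels (assumed)

def pixel_to_3d(x, y, depth_mm, fx=FX, fy=FY, cx=CX, cy=CY):
    """Back-project color pixel (x, y) with its transformed depth (in mm)
    into camera-space (X, Y, Z) using the ideal pinhole model.

    depth_mm would be capture.transformed_depth[y, x] in pyk4a.
    """
    Z = float(depth_mm)
    X = (x - cx) * Z / fx
    Y = (y - cy) * Z / fy
    return X, Y, Z

# A pixel at the principal point lies on the optical axis: X = Y = 0.
print(pixel_to_3d(320, 240, 1000))  # (0.0, 0.0, 1000.0)
```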
I had mistakenly assumed that the depth camera's coordinate convention was the opposite of the RGB camera's, for example that the depth value for `color[x][y]` was at `depth[y][x]`.