etiennedub / pyk4a

Python 3 wrapper for Azure-Kinect-Sensor-SDK
MIT License

Changing colorized depth image back into depth map (or point cloud) #135

Open mbusch-regis opened 3 years ago

mbusch-regis commented 3 years ago

I have a rather large number of frames that were split into standard color and colorized depth images and I've been asked to find a way to convert them to point clouds.

I did try using the depth_image_to_point_cloud transformation function and then loading it into an Open3D PointCloud:

pcl = pyk4a.depth_image_to_point_cloud(depth=depth, calibration=capture._calibration, thread_safe=capture.thread_safe)
pcloud = o3d.geometry.PointCloud()
pcloud.points = o3d.utility.Vector3dVector(np.asarray(pcl[0])[:, :3])
o3d.visualization.draw_geometries([pcl])

But the result looks nothing like the original.

How do I "un-colorize" the transformed depth images so that I can use Open3D to convert the pair to a point cloud?

Sorry for what is probably a basic question. I'm an image-processing noob.

lpasselin commented 3 years ago

Hi,

1) The documentation of o3d.utility.Vector3dVector says "Convert float64 numpy array of shape (n, 3) to Open3D format." You probably need to replace np.asarray(pcl[0])[:, :3] with pcl.reshape((-1, 3)).

See this example https://github.com/etiennedub/pyk4a/blob/0554de54a65b7fc578fa439c2235140d8fd3d72d/example/viewer_point_cloud.py#L33

2) Also, on the last line you are directly reusing pcl; I think you need to visualize pcloud instead.
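Putting both fixes together, here is a minimal sketch. It assumes depth_image_to_point_cloud returns an (H, W, 3) array of XYZ coordinates, with a zero array standing in for its output (576x640 is an assumed resolution for illustration only):

```python
import numpy as np

# Stand-in for the (H, W, 3) XYZ array that depth_image_to_point_cloud
# returns; 576x640 is an assumed resolution for illustration.
pcl = np.zeros((576, 640, 3), dtype=np.int16)

# Flatten to the (n, 3) float64 layout that o3d.utility.Vector3dVector expects.
points = pcl.reshape((-1, 3)).astype(np.float64)

# With open3d installed, the visualization would then be
# (note: draw pcloud, not pcl):
# import open3d as o3d
# pcloud = o3d.geometry.PointCloud()
# pcloud.points = o3d.utility.Vector3dVector(points)
# o3d.visualization.draw_geometries([pcloud])
```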

mbusch-regis commented 3 years ago

Thank you!

1) That transformation gets me MUCH closer.

2) Thank you -- that was a cut-and-paste error from condensing cells of a notebook. I probably wouldn't have caught it for a while, though.


The remaining problem is I see two identical figures side-by-side when there should only be one. Could this be due to the fact that the images were recorded on a different Kinect than the one hooked up currently and so the calibration is different? How would I correct for it?

lpasselin commented 3 years ago

I don't understand. Can you share more context?

The calibration object must be associated with the Kinect that was used to capture.

mbusch-regis commented 3 years ago

Sure -- sorry I wasn't clear.

I have a directory full of rgb and depth images that were taken by a colleague several weeks ago with Kinect A. I am currently trying to take those images and generate point clouds, while Kinect B is connected to my computer. The plan when we recorded the images was to use Open3D's geometry.RGBDImage.create_from_color_and_depth function to create the point clouds.

Unbeknownst to me, the colorized depth images were what got saved, and the Open3D function above won't use the transformed depth image.

It may also be worth mentioning that I have an intrinsic.json file saved from Kinect A under similar conditions to these images.

lpasselin commented 3 years ago

Probably you can recreate the calibration A with the intrinsics.json file. See the various calibration functions related to that. I'm not super familiar with this part of the library.
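A rough sketch of rebuilding Kinect A's calibration from the dumped file (the dict contents here are a stand-in for the real file, and the depth/color modes are assumptions that must match whatever was used when recording):

```python
import json

# Stand-in for the contents of the intrinsic.json dumped from Kinect A; a
# real file would be read with open("intrinsic.json").read() and contains
# the device's raw calibration blob.
raw = json.dumps({"CalibrationInformation": {}})

# pyk4a can rebuild a Calibration object from the raw JSON string
# (requires pyk4a; the modes below are assumptions):
# from pyk4a import Calibration, ColorResolution, DepthMode
# calibration = Calibration.from_raw(
#     raw,
#     depth_mode=DepthMode.NFOV_UNBINNED,
#     color_resolution=ColorResolution.RES_720P,
# )
```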

What do you mean by colorized depth image? Do you mean transformed depth? If so, depth to point cloud requires a regular depth image, iirc. It is possible to specify that the depth is already transformed, but we would need to modify depth_to_point_cloud.

lpasselin commented 3 years ago

No, it's already implemented. See the calibration_type_depth argument.
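For illustration, a hypothetical helper wrapping that argument (the function name and docstring are mine; it assumes pyk4a's depth_image_to_point_cloud accepts a calibration_type_depth keyword as described above):

```python
import numpy as np

def depth_frame_to_points(depth, calibration, depth_geometry=True):
    """Hypothetical helper: convert a depth frame to an (n, 3) point array.

    depth_geometry=True means `depth` is in the depth camera's own geometry;
    pass False for a depth image already transformed into the color camera's
    geometry (pyk4a's calibration_type_depth argument).
    """
    import pyk4a  # deferred import: requires the Azure Kinect SDK

    pcl = pyk4a.depth_image_to_point_cloud(
        depth=depth,
        calibration=calibration,
        thread_safe=True,
        calibration_type_depth=depth_geometry,
    )
    return pcl.reshape((-1, 3))
```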

mbusch-regis commented 3 years ago

Yes, depth images that have been saved after applying the transform that changes it from looking mostly black to being false colored (for humans). How do I undo that transformation?

Hmmm... setting calibration_type_depth=True didn't seem to help, but maybe that is because of the transformed image?

lpasselin commented 3 years ago

Set it to false

mbusch-regis commented 3 years ago

depth_image_to_point_cloud returns nothing when I set calibration_type_depth=False. Sorry to be a bother.

lpasselin commented 3 years ago

Ok, not sure why it doesn't work. This might be a bug in the corresponding function in pyk4a.cpp.

We don't have functions to transform a transformed depth image back into a regular depth image, since the transformation would lose information. In your case you don't need that, though.

The point cloud function should work properly with color reference I think.

mbusch-regis commented 3 years ago

I see lots of people wanting to make false-color depth images, but it doesn't seem anyone wants to go the other way. Go figure.

wangmiaowei commented 3 years ago

@lpasselin @mbusch-regis @shagren Is there any method to transform a depth image (*.png) to a point cloud?

shagren commented 3 years ago

@wangmiaowei, use depth_image_to_point_cloud(). Please make sure the png is a 16-bit grayscale image. You must load it via depth = cv2.imread('test.png', cv2.IMREAD_ANYDEPTH)

You also need a calibration. You can get the calibration from an opened device or a saved mkv file. You can also dump the calibration to a json file beforehand and then restore it from there.
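A sketch of the loading step, with a synthetic array standing in for the file (the resolution and the 1500 mm value are illustrative assumptions; cv2.IMREAD_ANYDEPTH preserves the 16-bit values, where the default flag would reduce them to 8 bits):

```python
import numpy as np

# Stand-in for:  depth = cv2.imread("test.png", cv2.IMREAD_ANYDEPTH)
# Here a synthetic frame where every pixel reads 1500 mm (1.5 m).
depth = np.full((576, 640), 1500, dtype=np.uint16)

# Sanity checks worth doing before calling depth_image_to_point_cloud:
assert depth.dtype == np.uint16, "depth png must be 16-bit grayscale"
assert depth.ndim == 2, "depth must be single-channel"

# With a calibration (from a device, an mkv, or a dumped json), the
# conversion would then be:
# pcl = pyk4a.depth_image_to_point_cloud(depth=depth, calibration=calibration)
```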

wangmiaowei commented 3 years ago

@shagren Thanks for your suggestion. I am really new to pyk4a. Another problem has come up: is there any method to project the point cloud back to a depth image?

shagren commented 3 years ago

@wangmiaowei, unfortunately there is no such method.

lpasselin commented 3 years ago

> depth_image_to_point_cloud returns nothing when I set calibration_type_depth=False. Sorry to be a bother.

We have to look into this. That is why I assigned the Bug label.