isl-org / Open3D

Open3D: A Modern Library for 3D Data Processing
http://www.open3d.org

Depth Scale does not work when creating point cloud from depth map #4519

Closed zhuqiangLu closed 2 years ago

zhuqiangLu commented 2 years ago

Hi all,

I have been trying to reconstruct a partial point cloud from a depth map, where the depth map is rendered by an off-screen renderer that loads a mesh. The depth image looks fine to me, but the reconstructed point cloud is essentially a flat plane, since the z values lie in the range [0, 1]. I have tried different depth_scale values, including 1/1000 to scale the depth up, but no luck.

Here is the script (most of it is adapted from another post, which I can no longer find):

import open3d
import matplotlib.pyplot as plt

def test():
    sphere = open3d.geometry.TriangleMesh.create_sphere(4.0)
    sphere.compute_vertex_normals()
    cylinder = open3d.geometry.TriangleMesh.create_cylinder(1.0, 4.0, 30, 4)
    cylinder.compute_triangle_normals()
    cylinder.translate([6, 2, 0.0])

    render = open3d.visualization.rendering.OffscreenRenderer(640, 480)
    mat = open3d.visualization.rendering.MaterialRecord()
    mat.shader = 'defaultLit'

    render.scene.add_geometry("sphere1", sphere, mat)
    render.scene.add_geometry("cylinder1", cylinder, mat)
    render.setup_camera(45, [0, 0, 0], [0, 0, -25.0], [0, 1, 0])

    cimg = render.render_to_image()
    dimg = render.render_to_depth_image()

    plt.subplot(1, 2, 1)
    plt.imshow(cimg)
    plt.subplot(1, 2, 2)
    plt.imshow(dimg)
    plt.show()

    intrinsic = open3d.camera.PinholeCameraIntrinsic(open3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
    cloud = open3d.geometry.PointCloud.create_from_depth_image(dimg, intrinsic, depth_scale=10000)
    open3d.visualization.draw_geometries([cloud])

Here is a screenshot of the result: [screenshot]
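For reference, the flat result is consistent with `render_to_depth_image()` returning a normalized depth buffer in [0, 1] by default: `create_from_depth_image()` converts raw depth to metric z by dividing by `depth_scale`, so no choice of `depth_scale` changes the shape of the cloud, only its overall scale. A minimal NumPy sketch (using a random array as a stand-in for the depth buffer) illustrates this:

```python
import numpy as np

# Stand-in for a normalized depth buffer in [0, 1], which is what
# render_to_depth_image() returns by default (z_in_view_space=False).
depth = np.random.default_rng(0).random((480, 640)).astype(np.float32)

# create_from_depth_image converts raw depth to metric z as depth / depth_scale,
# so every depth_scale rescales all z values by the same constant factor --
# the cloud stays a (near-)flat slab, just thicker or thinner.
for depth_scale in (1.0, 1000.0, 1 / 1000.0):
    z = depth / depth_scale
    print(f"depth_scale={depth_scale}: z range [{z.min():.6g}, {z.max():.6g}]")
```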

zhuqiangLu commented 2 years ago

Well, I managed to find a solution to this issue on the Internet, but now I have run into another problem.

To reconstruct the point cloud, I render the depth map with render.render_to_depth_image() using z_in_view_space=True. I then convert the depth image to a numpy array and back-project it as described in a Medium blog post, using a set of fake intrinsics (I removed the inf entries from the array as well). However, the resulting point cloud looks distorted: [screenshot] I suspect this has something to do with the extrinsics. So, how do I get the extrinsic matrix of the camera in an offscreen renderer?

I am new to all of these 3D and camera concepts, so I hope this question is not too basic.
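For anyone landing here, the back-projection described above can be sketched with a plain pinhole model. This is a minimal sketch, not the blog's exact code: fx, fy, cx, cy stand in for the "fake" intrinsics, and inf pixels mark rays that hit no geometry. (For the extrinsic, recent Open3D versions expose render.scene.camera.get_view_matrix(), which may be what is needed here, though I have not verified it against this script.)

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a view-space depth map to an (N, 3) array of points.

    Assumes a pinhole camera: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Pixels with non-finite depth (inf where no geometry was hit) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = np.isfinite(depth)          # drop the inf entries
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Toy depth map: a 4x4 plane at z = 2 with one invalid pixel.
d = np.full((4, 4), 2.0)
d[0, 0] = np.inf
pts = depth_to_points(d, fx=4.0, fy=4.0, cx=2.0, cy=2.0)
print(pts.shape)  # (15, 3) -- 15 valid points
```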

TWang1017 commented 2 years ago

Hi, I tried the reconstruction system pipeline, and the resulting point cloud has its z values capped at one. I modified the depth_scale, but it made no difference. Could you please advise how you solved it? Any help is much appreciated. Thanks.