Closed zhuqiangLu closed 2 years ago
Well, I managed to find a solution for the original issue online, but I ran into another problem.
To reconstruct a point cloud, I render a depth map with `render.render_to_depth_image(z_in_view_space=True)`. Then I take the numpy array of this depth image and reconstruct the point cloud as described in a Medium blog post, using a set of fake intrinsics (I also removed the `inf` entries from the array).
However, the point cloud looks distorted.
I guess this has something to do with the extrinsics. So, how do I get the extrinsic matrix of the camera in an offscreen renderer?
I am new to all these 3D and camera concepts, so I hope this question is not too stupid.
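The extrinsic is just the world-to-camera (view) matrix. If the camera was placed with a look-at call (eye, center, up), the same matrix can be rebuilt by hand. A minimal NumPy sketch, assuming the OpenGL convention that the camera looks down `-z` (which is what Open3D's renderer uses; the function name `look_at_extrinsic` is mine, not an Open3D API):

```python
import numpy as np

def look_at_extrinsic(eye, center, up):
    """Build a 4x4 world-to-camera (view/extrinsic) matrix from look-at
    parameters, OpenGL convention: the camera looks down -z."""
    eye, center, up = (np.asarray(v, dtype=float) for v in (eye, center, up))
    f = center - eye
    f /= np.linalg.norm(f)          # forward direction
    r = np.cross(f, up)
    r /= np.linalg.norm(r)          # right direction
    u = np.cross(r, f)              # orthonormalized up
    view = np.eye(4)
    view[0, :3], view[0, 3] = r, -r @ eye
    view[1, :3], view[1, 3] = u, -u @ eye
    view[2, :3], view[2, 3] = -f, f @ eye
    return view

# Camera at z=1 looking at the origin: the eye maps to the view-space
# origin, and the origin ends up 1 unit down the -z axis.
V = look_at_extrinsic([0, 0, 1], [0, 0, 0], [0, 1, 0])
```

Recent Open3D versions also expose the view matrix directly on the rendering camera (something like `renderer.scene.camera.get_view_matrix()`), which is worth checking before rebuilding it by hand.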
Hi, I tried the reconstruction system pipeline and the resulting point cloud has its z values capped at one. I modified `depth_scale` with no difference. Could you please advise how you solved it? Any help is much appreciated. Thanks!
Hi all,
I have been trying to reconstruct a partial point cloud from a depth map, where the depth map is rendered by an off-screen renderer that reads a mesh. The depth image looks fine to me, but the reconstructed point cloud is basically a flat plane, as the z-axis values fall in [0, 1]. I have tried different `depth_scale` values, including 1/1000 to scale up the depth channel, but no luck.
Here is the script (most of it is taken from another post, which I can no longer find):
Here is a screenshot of the result:
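A z range of [0, 1] usually means the depth buffer was read out in normalized device coordinates rather than metric view space, which no `depth_scale` can fix; rendering with `render_to_depth_image(z_in_view_space=True)` returns actual view-space depths instead. Given such a depth map, the unprojection is plain pinhole math. A minimal NumPy sketch (the helper name and the intrinsics `fx, fy, cx, cy` are mine; they must match the projection the renderer actually used, or the cloud comes out distorted):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a view-space depth map (H, W) into an (N, 3) point cloud.

    Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Non-finite depths (background pixels rendered as inf) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = np.isfinite(depth)
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# A flat 2x2 depth map at z = 2 unprojects to four points on the z = 2 plane.
depth = np.full((2, 2), 2.0)
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.0, cy=1.0)
```

The resulting array can be wrapped in a point cloud with `o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))` for visualization.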