isl-org / Open3D

Open3D: A Modern Library for 3D Data Processing
http://www.open3d.org

How to capture depth image from a point cloud? #1152

Closed lvgeng closed 4 years ago

lvgeng commented 5 years ago

It seems to be a simple question that has already been answered (https://github.com/intel-isl/Open3D/issues/402), but I am confused about how it works. Besides, I do not want an animation trajectory; I just want to capture a few depth images of a point cloud from some positions, as shown in the figure.

I wish to have a continuous depth map without holes; is that possible? Besides, it currently requires running o3d.visualization.Visualizer().run() before capturing the images, which seems extremely slow in the example. Is it possible to make it faster?

And it seems that for camera_parameters.intrinsic.set_intrinsics(width=640, height=480, fx=2000, fy=2000, cx=319.5, cy=239.5)

it has to be cx == (width - 1) / 2 and cy == (height - 1) / 2.

Is it possible to work around this?

[Screenshot from 2019-09-02 14-56-32]

from open3d import *
import numpy
import transforms3d
from scipy.spatial.transform import Rotation
import matplotlib.pyplot as plt

# Base pose: translation of 2 units along z.
original_pose = numpy.array([[1, 0, 0, 0],
                             [0, 1, 0, 0],
                             [0, 0, 1, 2],
                             [0, 0, 0, 1]])

# 4000 random Euler angles in [-0.5, 0.5] rad.
rotations_eul = numpy.random.randint(-50, 50, size=(4000, 3)) / 100.0

point_list = []

for i, eu in enumerate(rotations_eul):
    # Rotate the base pose and keep only its translation as a sample point.
    pose = numpy.dot(transforms3d.affines.compose([0, 0, 0],
                                                  Rotation.from_euler("xyz", eu).as_dcm(),  # renamed to as_matrix() in newer SciPy
                                                  [1, 1, 1]),
                     numpy.asarray(original_pose))
    point = numpy.dot(pose, numpy.array([0, 0, 0, 1]).T).T[0:3]
    point_list.append(point)

pcd = PointCloud()
pcd.points = Vector3dVector(numpy.asarray(point_list))

# Extrinsic is the world-to-camera transform (computer-vision convention).
# The intrinsics must match the render window size, with cx = (width - 1) / 2
# and cy = (height - 1) / 2.
camera_parameters = camera.PinholeCameraParameters()
camera_parameters.extrinsic = numpy.array([[1, 0, 0, 1],
                                           [0, 1, 0, 0],
                                           [0, 0, 1, 2],
                                           [0, 0, 0, 1]])
camera_parameters.intrinsic.set_intrinsics(width=1920, height=1080,
                                           fx=1000, fy=1000, cx=959.5, cy=539.5)

viewer = visualization.Visualizer()
viewer.create_window()          # default window size is 1920 x 1080
viewer.add_geometry(pcd)
viewer.run()                    # blocks until the window is closed

control = viewer.get_view_control()
control.convert_from_pinhole_camera_parameters(camera_parameters)

depth = viewer.capture_depth_float_buffer()
print("show depth")
print(numpy.asarray(depth))
plt.imshow(numpy.asarray(depth))
plt.imsave("testing_depth.png", numpy.asarray(depth), dpi=1)
plt.show()

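For reference, a minimal sketch of a non-blocking capture path, assuming the legacy Visualizer API (roughly Open3D 0.8/0.9): the window is created at the intrinsic resolution and hidden, a single poll_events()/update_renderer() pass replaces the interactive run() loop, and do_render=True forces a fresh render right before the capture. The helper name capture_depth is my own, not an Open3D function.

import numpy as np
import open3d as o3d

def capture_depth(geometry, camera_parameters, width=1920, height=1080):
    # Off-screen render: the window size must match the intrinsic width/height,
    # otherwise convert_from_pinhole_camera_parameters rejects the parameters.
    vis = o3d.visualization.Visualizer()
    vis.create_window(width=width, height=height, visible=False)
    vis.add_geometry(geometry)

    ctr = vis.get_view_control()
    ctr.convert_from_pinhole_camera_parameters(camera_parameters)

    # One render pass instead of the blocking run() loop.
    vis.poll_events()
    vis.update_renderer()
    depth = vis.capture_depth_float_buffer(do_render=True)

    vis.destroy_window()
    return np.asarray(depth)

For several poses you would create the window once, then call convert_from_pinhole_camera_parameters and capture_depth_float_buffer per extrinsic before destroying the window.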

syncle commented 4 years ago

I wish to have a continuous depth map without holes; is that possible?

How about making a mesh first? The rendered mesh should be denser than the point cloud when it is rendered.

lvgeng commented 4 years ago

I wish to have a continuous depth map without holes; is that possible?

How about making a mesh first? The rendered mesh should be denser than the point cloud when it is rendered.

A problem is that there is no point-cloud-to-mesh function in Open3D... any suggestions? As far as I can see, Open3D only provides functions to generate a mesh from RGBD images, or did I miss something important?

And the most important thing: is it possible to control the camera directly, without loading a JSON file?

griegler commented 4 years ago

Open3D 0.8 includes the ball pivoting algorithm and #1317 adds Poisson surface reconstruction.
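A minimal Poisson sketch, assuming a build that already contains the bindings from that PR and the method-style PointCloud API (Open3D >= 0.9); the file name and the depth value are placeholders:

import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")  # placeholder input

# Poisson reconstruction needs oriented normals on the input cloud.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Higher depth gives a finer (and slower) reconstruction.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.visualization.draw_geometries([mesh])

Poisson always produces a closed surface, so low-density regions can be trimmed afterwards using the returned per-vertex densities.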

lvgeng commented 4 years ago

Open3D 0.8 includes the ball pivoting algorithm and #1317 adds Poisson surface reconstruction.

Great, I will look into it. And the other problem... the view control: is there a better way to control the camera while rendering?

griegler commented 4 years ago

No, I think not. What would be more convenient?

lvgeng commented 4 years ago

No, I think not. What would be more convenient?

Like a regular graphics engine? Even OpenGL style would be OK.

Right now it seems that the only way to control the camera position and viewing direction is to set the extrinsic, while the FOV is controlled by the intrinsics. And the rendering result seems to depend somehow on the current window size.

It is kind of confusing. Or maybe I did something wrong.
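If an OpenGL-style look-at interface is what you are after, it can be layered on top of the extrinsic yourself. Below is a rough sketch; look_at_extrinsic is a hypothetical helper, not an Open3D function, and it assumes the computer-vision convention (x right, y down, z forward) that the extrinsic expects.

import numpy as np
import open3d as o3d

def look_at_extrinsic(eye, target, up=(0.0, 1.0, 0.0)):
    # Hypothetical helper: build a world-to-camera matrix from an eye point,
    # a look-at target and a world up vector.
    eye = np.asarray(eye, dtype=float)
    z = np.asarray(target, dtype=float) - eye
    z = z / np.linalg.norm(z)                     # viewing direction
    y = -np.asarray(up, dtype=float)              # image y points downwards
    x = np.cross(y, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                            # re-orthogonalize
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = np.vstack([x, y, z])      # rows: camera axes in world coords
    extrinsic[:3, 3] = -extrinsic[:3, :3] @ eye   # translation t = -R * eye
    return extrinsic

cam = o3d.camera.PinholeCameraParameters()
cam.intrinsic = o3d.camera.PinholeCameraIntrinsic(1920, 1080, 1000.0, 1000.0, 959.5, 539.5)
cam.extrinsic = look_at_extrinsic(eye=[0.0, -0.5, -2.0], target=[0.0, 0.0, 0.0])
# cam can then be passed to convert_from_pinhole_camera_parameters as in the snippet above.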

griegler commented 4 years ago

I see. Open3D uses a camera model and notation that is common in computer vision, but we can think about convenience functions that convert between different conventions.

lvgeng commented 4 years ago

I see. Open3D uses a camera model and notation that is common in computer vision, but we can think about convenience functions that convert between different conventions.

It is convenient for processing, but resampling and visualization are a different story... and they are usually needed...

Thank you!

lvgeng commented 4 years ago

Open3D 0.8 includes the ball pivoting algorithm and #1317 adds Poisson surface reconstruction.

Hi there. In https://github.com/intel-isl/Open3D/pull/1317 it is mentioned that a modern compiler is needed. Do I need to make any modifications to compile the code pulled from GitHub?

griegler commented 4 years ago

If you are using gcc, any version >=5 should work.

lvgeng commented 4 years ago

If you are using gcc, any version >=5 should work.

I see. I am just having some problems compiling its Python bindings... it is always hard to link it to my environment...

Solved.

lvgeng commented 4 years ago

Open3D 0.8 includes the ball pivoting algorithm and #1317 adds Poisson surface reconstruction.

Great, I will look into it. And the other problem... the view control: is there a better way to control the camera while rendering?

Em... I tried create_from_point_cloud_poisson, but I have no idea how to use create_from_point_cloud_ball_pivoting. According to the description it generates a convex mesh (or at least a closed mesh), which is not exactly what I need.

Is there any way to generate simpler results? I give the function five points in the same plane; is there any way for me to get a relatively flat surface? I wish the result would just connect the existing points with lines rather than generate a surface.

[Screenshots from 2019-12-03 22-29-16 and 2019-12-03 22-29-40]

griegler commented 4 years ago

BPA should be able to handle that (it is not restricted to convex shapes). You have to think of a 3D ball with a given radius that rolls over your surface points. Whenever it hits three points with compatible normals (and no other point is within the ball), it creates a triangle connecting the three points. I think you were referring to alpha shapes?
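In case alpha shapes are indeed closer to what you want: recent Open3D versions (around 0.9) also expose them. A minimal sketch follows, with the alpha value as a placeholder to tune (it plays a role similar to the ball radius):

import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")  # placeholder input

alpha = 0.03  # placeholder: roughly the size of the gaps you still want bridged
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha)
mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([mesh])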

lvgeng commented 4 years ago

BPA should be able to handle that (it is not restricted to convex shapes). You have to think of a 3D ball with a given radius that rolls over your surface points. Whenever it hits three points with compatible normals (and no other point is within the ball), it creates a triangle connecting the three points. I think you were referring to alpha shapes?

I see... well, I am kind of confused about the second parameter of create_from_point_cloud_ball_pivoting. I assume one of the values means the initial size of the ball, but what is the other? A step size?

And I got an interesting result with this.

The point cloud is downsampled with voxel_size = 0.0001. Its bounding box should be about 0.03 × 0.03 × 0.01.

mesh = open3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, open3d.utility.DoubleVector([0.00001, 0.0003]))

The generated mesh has a lot of holes. I assume that is because the point cloud is too dense in some parts? But the pattern suggests there is something more to it; is there a good explanation?

[Screenshot from 2019-12-04 22-03-11]

Here is the point cloud I have.

tile_group_00.zip

griegler commented 4 years ago

The second parameter is a list of radii (of the ball), so BPA will be run with all of the given radii; usually you should provide them in increasing order. Did you try increasing the radius? Maybe the points are just farther apart (the ball drops through).
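A common rule of thumb, sketched below on the assumption that your build has compute_nearest_neighbor_distance() and the method-style estimate_normals() (pcd being the downsampled cloud from above), is to derive the radii from the average point spacing so the ball cannot fall through typical gaps:

import numpy as np
import open3d as o3d

# Estimate the average spacing between neighbouring points.
distances = np.asarray(pcd.compute_nearest_neighbor_distance())
avg_dist = distances.mean()

# Several ball radii, in increasing order, a few times the average spacing.
radii = o3d.utility.DoubleVector([1.5 * avg_dist, 3.0 * avg_dist, 6.0 * avg_dist])

pcd.estimate_normals()  # BPA relies on the point normals
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)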

lvgeng commented 4 years ago


[Screenshot from 2019-12-05 12-29-07]

I think the point cloud might be too dense in some parts. These are exactly where the holes appear. Any suggestion on how to tweak this?

I tried increasing and decreasing the radius, with no luck. It is kind of strange that the result did not change much for this point cloud.

PS: Is there any way for me to know the UV map of the generated mesh? (It seems that the generated mesh loses a lot of color information.)

griegler commented 4 years ago

These are exactly where the holes appear. Any suggestion on how to tweak this?

You could apply a voxel filter?

Is there any way for me to know the UV map of the generated mesh? (It seems that the generated mesh loses a lot of color information.)

There is no UV mapping implemented yet, but BPA just re-uses the color values of the points that are provided as input. Given that BPA does not discard many points, there is no reason for the color information to be lost.
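To make the voxel-filter suggestion concrete, a minimal sketch assuming the method-style voxel_down_sample() (Open3D >= 0.9); the voxel size is a placeholder chosen to even out the densest regions:

import open3d as o3d

voxel_size = 0.0005  # placeholder: a few times the finest point spacing
pcd_even = pcd.voxel_down_sample(voxel_size)  # one averaged point per occupied voxel

radii = o3d.utility.DoubleVector([2.0 * voxel_size, 4.0 * voxel_size])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd_even, radii)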

lvgeng commented 4 years ago

These are exactly where the holes appear. Any suggestion on how to tweak this?

You could apply a voxel filter?

Is there any way for me to know the UV map of the generated mesh? (It seems that the generated mesh loses a lot of color information.)

There is no UV mapping implemented yet, but BPA just re-uses the color values of the points that are provided as input. Given that BPA does not discard many points, there is no reason for the color information to be lost.

A voxel filter? Any suggestions on what kind of filter? And... it is kind of weird: this is already the result of voxel downsampling, but the points are not evenly distributed... any suggestions on what I should do to avoid that?

If there is no UV map... is there any possibility to get the triangle soup?
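On the triangle-soup question: the mesh buffers are directly convertible to numpy arrays, so a minimal sketch (with mesh being the BPA result from above; vertex_colors is only filled if the input cloud had colors) would be:

import numpy as np

vertices = np.asarray(mesh.vertices)      # (V, 3) vertex positions
triangles = np.asarray(mesh.triangles)    # (T, 3) vertex indices per face
colors = np.asarray(mesh.vertex_colors)   # (V, 3) RGB in [0, 1], possibly empty

# "Triangle soup": one (3, 3) block of xyz coordinates per triangle.
soup = vertices[triangles]                # shape (T, 3, 3)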

lvgeng commented 4 years ago

These are exactly where the holes appear. Any suggestion on how to tweak this?

You could apply a voxel filter?

Is there any way for me to know the UV map of the generated mesh? (It seems that the generated mesh loses a lot of color information.)

There is no UV mapping implemented yet, but BPA just re-uses the color values of the points that are provided as input. Given that BPA does not discard many points, there is no reason for the color information to be lost.

I read through the related paper... Well, it seems that BPA was not originally designed to handle point clouds of varying density, and a very dense point cloud seems to lead to some kind of issue that makes the mesh impossible to display in the viewer.

Additionally... the viewer does not accept a camera with a small FOV, which is weird.

griegler commented 4 years ago

Sorry, I am not really familiar with the viewer part of the code. Maybe you could open another issue describing the problem in detail; @yxlao or @qianyizh might be able to help then.

pablospe commented 4 years ago

The viewer does not accept a camera with a small FOV, which is weird.

This is related to issue https://github.com/intel-isl/Open3D/issues/1427

yanyan-li commented 1 year ago

The window can be created with visible=False. Thanks for the design.

If our remote server does not have a GUI for viewer.create_window(), are there any solutions for rendering depth maps from a dense mesh? Thanks a lot.
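A minimal sketch of the visible=False route for rendering a depth map from a mesh; note that on a display-less server the legacy visualizer still needs some OpenGL context (a headless/OSMesa build of Open3D or a virtual display such as xvfb), which is an assumption about the setup, and the file name is a placeholder:

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scene_mesh.ply")  # placeholder dense mesh
mesh.compute_vertex_normals()

vis = o3d.visualization.Visualizer()
vis.create_window(width=1920, height=1080, visible=False)  # no on-screen window
vis.add_geometry(mesh)

# Set the camera here via get_view_control() if a specific pose is needed.
vis.poll_events()
vis.update_renderer()
depth = np.asarray(vis.capture_depth_float_buffer(do_render=True))
vis.destroy_window()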