mkazhdan / PoissonRecon

Poisson Surface Reconstruction

The reconstruction result for PoissonRecon is strange #214

Open hzx123q opened 2 years ago

hzx123q commented 2 years ago

After I sampled 50,000 of the 100,000 points of the horse point cloud, the reconstructed result was very strange. Picture 1 uses the 100,000 points for PoissonRecon and picture 2 uses the 50,000 sampled points.

hzx123q commented 2 years ago

The files are attached below: horse.txt, sample_horse.txt

mkazhdan commented 2 years ago

I'm afraid there is something wrong with your point file sample_horse.zip. Here's what I get when I convert it to .ply format.

hzx123q commented 2 years ago

Thank you for your reply! I solved the problem by resampling the point cloud, and I have some other questions to ask you. I compute the horse's normals with the PCL library and the result is not good. Picture 1 is reconstructed with pointweight=4, and picture 2 is reconstructed with pointweight=0.


hzx123q commented 2 years ago

I have tried different numbers of neighbor points (3, 5, 20, 50) when computing the normals with the PCL library. Though the computed normals are different, the reconstructed results look similar: one side of the horse's face and the region between the horse's legs have redundant parts.

Are the redundant parts produced by wrongly estimated normals, or for the reason you mentioned in the 2006 paper?

One of the files with normals is attached below: horse_nn.npts.txt

The picture in your paper I am referring to is the one where the reconstruction connects the two feet.

mkazhdan commented 2 years ago

My guess is that while PCL accurately computes the normal line through the points, it gets the orientation wrong. To quote from PCL (https://pointclouds.org/documentation/tutorials/normal_estimation.html): "In general, because there is no mathematical way to solve for the sign of the normal, its orientation computed via Principal Component Analysis (PCA) as shown above is ambiguous, and not consistently oriented over an entire point cloud dataset."
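
For concreteness, here is a minimal sketch of the PCA step the quote refers to, written with Eigen purely for illustration (it is not PCL's actual implementation): the normal is the eigenvector of the local covariance with the smallest eigenvalue, and nothing in that computation determines its sign.

```cpp
// Minimal sketch: PCA normal of a local neighborhood (Eigen used for illustration only).
#include <Eigen/Dense>
#include <vector>

Eigen::Vector3f pcaNormal(const std::vector<Eigen::Vector3f>& neighborhood)
{
    // Centroid of the neighborhood.
    Eigen::Vector3f centroid = Eigen::Vector3f::Zero();
    for (const auto& p : neighborhood) centroid += p;
    centroid /= float(neighborhood.size());

    // Covariance of the neighborhood about its centroid.
    Eigen::Matrix3f cov = Eigen::Matrix3f::Zero();
    for (const auto& p : neighborhood)
    {
        Eigen::Vector3f d = p - centroid;
        cov += d * d.transpose();
    }

    // Eigenvalues are returned in increasing order, so column 0 spans the normal line.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> es(cov);
    Eigen::Vector3f n = es.eigenvectors().col(0);
    return n; // -n is equally valid: the sign has to be fixed by other means
}
```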

There is, however, a fair amount of work on consistently "signing" the normals, starting with the pioneering work of Hoppe et al., "Surface Reconstruction from Unorganized Points" (1992), and continuing through the last couple of years (e.g. "Parallel Globally Consistent Normal Orientation of Raw Unorganized Point Clouds" by Jakob et al., 2019). It may make sense to look there.
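
For intuition, here is a much-simplified sketch of the propagation idea those methods build on: flip each normal so it agrees with the neighbor it was reached from. It uses brute-force neighbor search and a breadth-first pass instead of the Riemannian-graph MST of Hoppe et al.; the point layout and k are placeholders, not anything from the papers.

```cpp
// Simplified sketch of greedy normal-orientation propagation over a k-NN graph.
#include <algorithm>
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

struct PointN { float p[3]; float n[3]; };

// Brute-force k nearest neighbors (fine for a sketch; use a kd-tree in practice).
static std::vector<size_t> kNearest(const std::vector<PointN>& pts, size_t i, size_t k)
{
    std::vector<std::pair<float, size_t>> d;
    for (size_t j = 0; j < pts.size(); ++j)
        if (j != i)
        {
            float dx = pts[j].p[0] - pts[i].p[0];
            float dy = pts[j].p[1] - pts[i].p[1];
            float dz = pts[j].p[2] - pts[i].p[2];
            d.push_back({dx * dx + dy * dy + dz * dz, j});
        }
    std::sort(d.begin(), d.end());
    std::vector<size_t> nn;
    for (size_t j = 0; j < k && j < d.size(); ++j) nn.push_back(d[j].second);
    return nn;
}

// Breadth-first propagation of a consistent sign from a seed point.
void propagateOrientation(std::vector<PointN>& pts, size_t k = 10)
{
    if (pts.empty()) return;
    std::vector<bool> visited(pts.size(), false);
    std::queue<size_t> q;
    q.push(0); visited[0] = true; // seed: keep point 0's sign as-is
    while (!q.empty())
    {
        size_t i = q.front(); q.pop();
        for (size_t j : kNearest(pts, i, k))
            if (!visited[j])
            {
                // If the neighbor's normal disagrees with the parent's, flip it.
                float dot = pts[i].n[0] * pts[j].n[0] + pts[i].n[1] * pts[j].n[1] + pts[i].n[2] * pts[j].n[2];
                if (dot < 0) for (int c = 0; c < 3; ++c) pts[j].n[c] = -pts[j].n[c];
                visited[j] = true;
                q.push(j);
            }
    }
}
```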

hzx123q commented 2 years ago

Thank you very much! I used PCL's setViewPoint function to determine the point cloud orientation, setting the view point to (0,0,0). But it may not work, so I will do some research on point cloud orientation.
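
For reference, a minimal sketch of the setup described above, assuming the standard pcl::NormalEstimation pipeline (the file name and neighborhood size are placeholders):

```cpp
// Sketch: k-NN normal estimation with the viewpoint placed at the origin.
#include <pcl/features/normal_3d.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("horse.pcd", *cloud); // placeholder file name

    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    ne.setSearchMethod(tree);
    ne.setKSearch(20); // neighborhood size is a guess

    // Every estimated normal is flipped toward this single viewpoint; for a closed
    // object like the horse, no single viewpoint can be correct for all points
    // (see the next comment).
    ne.setViewPoint(0.0f, 0.0f, 0.0f);

    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);
    return 0;
}
```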

mkazhdan commented 2 years ago

I would guess that what PCL is trying to do is disambiguate the sign by using the camera information. In particular, for a point to be seen from a particular view-point, the dot-product of the view direction and the surface normal must be negative.

Unfortunately, that won't help you since your point-cloud is such that it cannot all be seen from a single viewpoint. (That is, from any viewpoint, some of the points will be back-facing.)

hzx123q commented 2 years ago

Hi professor, I solved the orientation problem using an open-source code that computes normals patch by patch (https://github.com/galmetzer/dipole-normal-prop). In this way I can reconstruct the horse with a good result. But when I try to reconstruct a room from which I filtered out the points on the roof and floor (the room's point cloud is shown in picture 1, the reconstruction result in picture 2), the result does not look good. Is there anything to pay attention to when reconstructing a large scene (like a room or a street)? I would appreciate it if you could give me some suggestions on reconstructing large scenes.

mkazhdan commented 2 years ago

My guess is that, as before, the problem stems from disambiguating normals. The room scene is substantially harder so I am not surprised that you are having trouble with it.

In general, when you have point cloud datasets like these you also tend to have information about the camera orientation. If you can get access to that, disambiguating the sign should be straightforward. (See the comment above about the dot-product of the normal and the view direction having to be negative.)
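
As a concrete illustration of that test, here is a minimal sketch assuming each point carries the position of the sensor that captured it; the per-point "sensor origin" layout is a placeholder for whatever your capture pipeline provides.

```cpp
// Sketch: orient normals using the per-point sensor position.
#include <vector>

struct OrientedPoint
{
    float p[3];      // point position
    float n[3];      // estimated normal (sign possibly wrong)
    float origin[3]; // position of the sensor that saw this point
};

void orientTowardSensor(std::vector<OrientedPoint>& points)
{
    for (auto& pt : points)
    {
        // View direction from the sensor to the point.
        float v[3] = { pt.p[0] - pt.origin[0], pt.p[1] - pt.origin[1], pt.p[2] - pt.origin[2] };
        // For the point to be visible from the sensor, dot(view direction, normal)
        // must be negative; otherwise flip the normal.
        float dot = v[0] * pt.n[0] + v[1] * pt.n[1] + v[2] * pt.n[2];
        if (dot > 0)
            for (int c = 0; c < 3; ++c) pt.n[c] = -pt.n[c];
    }
}
```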

hzx123q commented 2 years ago

Thank you, professor! My data comes from a lidar sensor. I'll try to take the lidar information into account when determining the normal directions for the room.