[Open] hzx123q opened this issue 2 years ago
The files are attached below: horse.txt, sample_horse.txt
I'm afraid there is something wrong with your point file (sample_horse.zip). Here's what I get when I convert it to .ply format.
Thank you for your reply! I solved the problem by resampling the point cloud. And I have some other questions to ask you~ I computed the horse's normals with the PCL library, and the result is not good. Picture 1 is reconstructed with pointweight=4, and picture 2 is reconstructed with pointweight=0.
I have tried different numbers of neighbor points (3, 5, 20, 50) when computing the normals with the PCL library. Though the computed normals are different, the reconstructed results look similar: one side of the horse's face and the region between the horse's legs have redundant parts.
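In case it helps, this is roughly the setup I am using for the normal estimation (a minimal sketch, assuming the standard PCL NormalEstimation pipeline; the input file name is just a placeholder, and the k value is the one I varied):

```cpp
#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>

int main()
{
  // Placeholder input: my actual data is the .npts/.txt point list attached below.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPLYFile("horse.ply", *cloud);

  // k-nearest-neighbor normal estimation; I tried k = 3, 5, 20, 50.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  ne.setSearchMethod(tree);
  ne.setKSearch(20);

  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);
  return 0;
}
```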
Are the redundant parts produced by wrongly estimated normals, or do they come from the cause you mentioned in the 2006 paper?
One of the files with normals is attached below: horse_nn.npts.txt
The picture in your paper (where the two feet are connected):
I'm guessing that while PCL accurately computes the normal line through the points, it gets the orientation wrong. To quote from PCL (https://pointclouds.org/documentation/tutorials/normal_estimation.html): "In general, because there is no mathematical way to solve for the sign of the normal, its orientation computed via Principal Component Analysis (PCA) as shown above is ambiguous, and not consistently oriented over an entire point cloud dataset."
There is, however, a good deal of work on consistently "signing" the normal, starting with the pioneering work of Hoppe et al., "Surface Reconstruction from Unorganized Points" (1992), and going on through the last couple of years (e.g. "Parallel Globally Consistent Normal Orientation of Raw Unorganized Point Clouds" by Jakob et al., 2019). It may make sense to look there.
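To make the ambiguity concrete, here is a minimal sketch of the PCA step for a single point (written against plain Eigen, not PCL's actual implementation): the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue, and since -v is an eigenvector whenever v is, nothing in the computation determines the sign.

```cpp
#include <Eigen/Dense>
#include <vector>

// PCA normal for one point, given its k nearest neighbors.
// Both +n and -n are equally valid answers.
Eigen::Vector3d pcaNormal(const std::vector<Eigen::Vector3d>& neighbors)
{
    // Centroid of the neighborhood
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& p : neighbors) mean += p;
    mean /= static_cast<double>(neighbors.size());

    // 3x3 covariance of the neighborhood
    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : neighbors)
    {
        const Eigen::Vector3d d = p - mean;
        cov += d * d.transpose();
    }

    // The eigenvector with the smallest eigenvalue spans the normal direction.
    // (Eigen sorts the eigenvalues in increasing order.)
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
    Eigen::Vector3d n = es.eigenvectors().col(0);

    // Nothing above fixes the sign: -n satisfies cov * n = lambda * n just as well.
    return n;
}
```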
Thank you very much! I used PCL's setViewPoint function to determine the normal orientation, and I set the viewpoint to (0, 0, 0). But it may not work, and I will do some research on point cloud orientation.
I would guess that what PCL is trying to do is disambiguate the sign by using the camera information. In particular, for a point to be seen from a particular view-point, the dot-product of the view direction and the surface normal must be negative.
Unfortunately, that won't help you since your point-cloud is such that it cannot all be seen from a single viewpoint. (That is, from any viewpoint, some of the points will be back-facing.)
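To make that concrete, here is a minimal sketch of the single-viewpoint flip (essentially what PCL's flipNormalTowardsViewpoint helper does, if I recall correctly). For a closed surface like the horse it necessarily orients the normals on the far side of the object incorrectly, which is why a single viewpoint cannot fix things:

```cpp
#include <Eigen/Dense>
#include <vector>

// Flip each normal so that it faces the viewpoint vp, i.e. so that the
// dot-product of the view direction (point - vp) and the normal is negative.
// For points that are back-facing with respect to vp, this flips them the
// wrong way, which is unavoidable for a closed object and a single vp.
void orientTowardViewpoint(const std::vector<Eigen::Vector3d>& points,
                           const Eigen::Vector3d& vp,
                           std::vector<Eigen::Vector3d>& normals)
{
    for (std::size_t i = 0; i < points.size(); ++i)
    {
        const Eigen::Vector3d viewDir = points[i] - vp;  // from viewpoint to point
        if (viewDir.dot(normals[i]) > 0.0)               // normal faces away from vp
            normals[i] = -normals[i];
    }
}
```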
Hi, professor. I solved the orientation problem using an open-source code that computes normals patch by patch (https://github.com/galmetzer/dipole-normal-prop). In this way, I can reconstruct the horse with a good result. But then I tried to reconstruct a room from which I filtered out the points on the roof and floor (the room's point cloud is shown in picture 1, and the reconstruction result is shown in picture 2). The result does not seem good. Is there anything to pay attention to when reconstructing a large scene (like a room or a street)? I would appreciate it if you could give me some suggestions on reconstructing large scenes.
My guess is that, as before, the problem stems from disambiguating normals. The room scene is substantially harder so I am not surprised that you are having trouble with it.
In general, when you have point cloud datasets like these, you also tend to have information about the camera orientation. If you can get access to that, disambiguating the sign should be straightforward. (See the comment above about the dot-product of the normal and the view direction having to be negative.)
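A minimal sketch of that idea, assuming you can associate each point with the sensor position it was acquired from (the per-point origins array below is an assumption about your data, not something the reconstruction code gives you):

```cpp
#include <Eigen/Dense>
#include <vector>

// Orient each normal using the (assumed) per-point sensor position: the point
// was visible from its own sensor origin, so the normal must face that origin.
void orientWithSensorOrigins(const std::vector<Eigen::Vector3d>& points,
                             const std::vector<Eigen::Vector3d>& origins,  // one per point (assumed available)
                             std::vector<Eigen::Vector3d>& normals)
{
    for (std::size_t i = 0; i < points.size(); ++i)
    {
        const Eigen::Vector3d ray = points[i] - origins[i];  // sensor -> point
        if (ray.dot(normals[i]) > 0.0)                       // normal faces away from the sensor
            normals[i] = -normals[i];
    }
}
```

Unlike the single-viewpoint case above, this works for scenes that cannot be seen from one position, because each point is tested against the position it was actually scanned from.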
Thank you, professor! My data comes from a lidar sensor. I'll try to take the lidar information into account when determining the normal directions for the room~
After I sampled 50,000 of the horse's 100,000 points, the reconstruction results were very strange. Picture 1 uses 100,000 points for PoissonRecon, and picture 2 uses 50,000 points for PoissonRecon.