Linusnie opened 11 months ago
Hi @Linusnie, did you manage to resolve the above issue? I'm facing the same.
@kbmufti Kind of, I moved on to using the PyTorch version, where the pre-processing is applied automatically if you pass in a .xyz file, see here
I should say the sampling settings in the PyTorch repo are slightly different from the TensorFlow implementation, so it would still be nice to have the original .npy
files for reproducing the paper results. For example: in the PyTorch repo the local sigma scale is set automatically based on the number of points, while it's fixed in TensorFlow. The PyTorch repo also auto-scales the point cloud to fit in a 1x1x1 bounding box centered at 0.
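For reference, here's a minimal sketch of that kind of preprocessing (unit-bounding-box normalization plus a KNN-based local sigma). This is my own illustration, not code from either repo; the function name and k=51 are placeholder assumptions (the actual PyTorch repo derives the sigma scale from the total point count):

import numpy as np
from scipy.spatial import cKDTree

def normalize_and_local_sigma(points, k=51):
    # Center the cloud at the origin and scale it to fit a 1x1x1 bounding box.
    bb_min, bb_max = points.min(axis=0), points.max(axis=0)
    points = points - (bb_min + bb_max) / 2.0
    points = points / (bb_max - bb_min).max()

    # Per-point sigma: distance to the k-th nearest neighbour, so the noise
    # used to generate query points adapts to the local sampling density.
    # k=51 is a placeholder, not the value used by the repo.
    dists, _ = cKDTree(points).query(points, k=k)
    local_sigma = dists[:, -1]
    return points, local_sigma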
@Linusnie, thank you for your response. What version of Python are you using? I am getting dependency conflicts.
I followed the instructions here https://github.com/mabaorui/NeuralPull/issues/2#issuecomment-880653009
Hi, many thanks for your work on this method.
I'm looking into reproducing your results on the Famous dataset. But I noticed that the .npz files are missing from the download link in the readme (https://drive.google.com/drive/folders/1qre9mgJNCKiX11HnZO10qMZMmPv_gnh3?usp=sharing). Would you be able to add those files in?
I also tried recreating the point clouds with
python sample_query_point.py --out_dir /home/linus/workspace/data/neural_pull/famous_new/ --input_dir /home/linus/workspace/data/points2surf/famous_noisefree/04_pts/ --dataset famous
but I get the error "Since the 3DBenchy dataset has less than POINT_NUM_GT (=20000) points". Could you clarify what value of POINT_NUM_GT was used for the Famous dataset?
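(For anyone hitting the same error: a quick way to see which shapes fall below the threshold is to count the points in each file of the input directory. The snippet below is just a hypothetical check I wrote for this thread; it assumes the point clouds are stored as .npy arrays or plain-text .xyz files, so adjust the loader if the dataset stores them differently.)

import glob, os
import numpy as np

POINT_NUM_GT = 20000  # value from the error message above
input_dir = "/home/linus/workspace/data/points2surf/famous_noisefree/04_pts/"

for path in sorted(glob.glob(os.path.join(input_dir, "*"))):
    # Assumes each file is either a .npy array or a whitespace-separated text file of shape (N, 3).
    pts = np.load(path) if path.endswith(".npy") else np.loadtxt(path)
    if pts.shape[0] < POINT_NUM_GT:
        print(f"{os.path.basename(path)}: {pts.shape[0]} points (< {POINT_NUM_GT})")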