@bearprin yes, you're right. The mean is essentially a normalization by the number of points in the subsamples. It looks like I forgot it in the code. Also note that the shapes are assumed to be normalized to unit-cube size to get comparable results.
If the number of points in the subsamples is always the same, the error is only off by a constant factor (10,000 here). Because of the default optimization in trimesh's surface sampling, which rejects samples that are too close together, the correct CD might be ~5% lower than stated in the paper. However, this should apply to all methods very similarly. I'm sorry for this inaccuracy. A PR would be very welcome.
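For reference, a minimal sketch of what the mean-normalized CD should look like (my own SciPy-based paraphrase, not the actual code in this repo's evaluation.py):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    # Symmetric Chamfer distance between two (N, 3) point sets.
    # Taking the mean in each direction normalizes by the number of
    # points, so results stay comparable across subsample sizes.
    dist_a_to_b, _ = cKDTree(points_b).query(points_a)  # NN distance in B for each point of A
    dist_b_to_a, _ = cKDTree(points_a).query(points_b)  # NN distance in A for each point of B
    return np.mean(dist_a_to_b) + np.mean(dist_b_to_a)
```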
Thanks for your reply!
The mean operator seems to have a significant influence on the results because of the different CD formulations.
I noticed this because Neural-Pull claims a CD of 0.48 on the FAMOUS no-noise dataset (where Points2Surf gets 1.41), which contradicts my re-training results. The new CVPR 2022 paper POCO gets 1.34 CD, which is still the same magnitude as Points2Surf. I was confused about what leads to the 0.48 CD.
After looking into it, I found that Neural-Pull uses the CD formulation of ConvONet with the additional mean operator, which significantly reduces the distance and makes the comparison seem unfair.
https://github.com/mabaorui/NeuralPull/blob/master/NeuralPull.py#L92-L138
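To illustrate the scale difference, here is a toy example (random points, my own code, not taken from either repo); with equal-sized subsamples the summed variant is larger than the mean-normalized one by exactly the number of points:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
a = rng.random((10000, 3))  # two subsamples of equal size in the unit cube
b = rng.random((10000, 3))

d_ab, _ = cKDTree(b).query(a)  # nearest-neighbor distances A -> B
d_ba, _ = cKDTree(a).query(b)  # nearest-neighbor distances B -> A

cd_sum = d_ab.sum() + d_ba.sum()     # summed formulation (no normalization)
cd_mean = d_ab.mean() + d_ba.mean()  # mean formulation (extra 1/N per direction)
print(cd_sum / cd_mean)              # exactly 10000.0 for equal-sized sets
```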
@mabaorui
Will improve this in the follow-up work.
Hi,
I have noticed that the CD formulation in your main paper has the mean operator, but the implementation omits it. The implementation seems to follow the formulation in http://graphics.stanford.edu/courses/cs468-17-spring/LectureSlides/L14%20-%203d%20deep%20learning%20on%20point%20cloud%20representation%20(analysis).pdf, which directly sums the distances; I write both variants out below.
https://github.com/ErlerPhilipp/points2surf/blob/2af6e0facf58422ed12e0c676c70199cd0dfbb43/source/base/evaluation.py#L222-L256
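Concretely, the two variants I am comparing are (my own notation; the distances may also be squared depending on the definition):

```latex
% Mean-normalized CD (as written in the paper):
d_{\mathrm{CD}}(A, B) =
    \frac{1}{|A|} \sum_{a \in A} \min_{b \in B} \lVert a - b \rVert
  + \frac{1}{|B|} \sum_{b \in B} \min_{a \in A} \lVert a - b \rVert

% Summed variant (as in the slides / the implementation):
d_{\mathrm{sum}}(A, B) =
    \sum_{a \in A} \min_{b \in B} \lVert a - b \rVert
  + \sum_{b \in B} \min_{a \in A} \lVert a - b \rVert
```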
Am I right?
Best