mabaorui / NeuralPull

Implementation of ICML 2021: Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces
MIT License

Question about the quantitative comparison under the dataset released by Points2Surf #15

Closed · bearprin closed this 1 year ago

bearprin commented 1 year ago

Dear Baorui,

First of all, thank you for releasing the Neural-Pull code. However, on the datasets released by Points2Surf, such as the FAMOUS no-noise dataset, the Chamfer distance results I obtain with your method are inconsistent with Table 1 of the main paper. I used the evaluation code released by Points2Surf.

| Chamfer | Points2Surf (orig. paper) | Points2Surf (my result) | Neural-Pull (your paper) | Neural-Pull (my result) |
|---|---|---|---|---|
| FAMOUS no-noise | 1.41 | 1.46 | 0.22 | 3.41 |

I have no idea what happened. Could you help me? Thank you very much!

P.S. In your code I only found the Chamfer distance formulation defined by occupancy_networks, not the one used by Points2Surf. As far as I understand, the Chamfer distances defined by occupancy_networks and Points2Surf are different (see ErlerPhilipp/points2surf#20). I suspect that, for fairness, results on the datasets released by Points2Surf should be evaluated with the Chamfer distance formulation given by Points2Surf, just as POCO does. A rough sketch of the two conventions follows below.
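To make the difference concrete, here is my own illustration of the two conventions as I understand them; this is NOT code from either repository, and the point counts and function names are made up for this example:

```python
# My own illustration of the two Chamfer conventions, NOT code from
# either repository. Point counts and names are made up for this example.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_mean_halved(rec_pts, gt_pts):
    # occupancy_networks-style: average each directed distance over its
    # point set, then average the two directions (the "half operator").
    acc = cKDTree(gt_pts).query(rec_pts)[0].mean()   # reconstruction -> GT
    comp = cKDTree(rec_pts).query(gt_pts)[0].mean()  # GT -> reconstruction
    return 0.5 * (acc + comp)

def chamfer_summed(rec_pts, gt_pts):
    # What the Points2Surf code appears to do: sum the nearest-neighbor
    # distances instead of averaging them, with no 1/2 factor.
    acc = cKDTree(gt_pts).query(rec_pts)[0].sum()
    comp = cKDTree(rec_pts).query(gt_pts)[0].sum()
    return acc + comp

rng = np.random.default_rng(0)
rec = rng.random((10000, 3))
gt = rng.random((10000, 3))
# With 10k samples per model, the summed variant is roughly 20000x the
# halved mean, so numbers from the two conventions are not comparable.
print(chamfer_mean_halved(rec, gt), chamfer_summed(rec, gt))
```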

So do you think the statistics in your paper are a little misleading?

Best

https://github.com/mabaorui/NeuralPull/blob/c093a52308a9b74446d24cc6c1b0fee5ee5bb7bb/NeuralPull.py#L92-L139

mabaorui commented 1 year ago

Hi,

Thanks for your interest in our work.

We did not check the implementation of Points2Surf when we submitted our paper, since we noticed that Points2Surf leverages the same CD equation to evaluate its results (Eq. 10 in the Points2Surf paper). Thanks.

bearprin commented 1 year ago

Hi, I got your point.

Looking at the code, we can see that the Points2Surf implementation does not actually use Eq. 10 (shown below) from its main text.

[image: Eq. 10 from the Points2Surf paper]

They actually use the Chamfer distance below in the code: https://github.com/ErlerPhilipp/points2surf/blob/2af6e0facf58422ed12e0c676c70199cd0dfbb43/source/base/evaluation.py#L222-L256

[image: the Chamfer distance as implemented in the Points2Surf evaluation code]

Thus, I think it would be better to evaluate results under the same metric formulation for fairness, just as POCO does, since the mean operator (the normalization) has a large impact on the quantitative results: the numbers are off by a factor of about 10000 here, presumably because the distances are summed over the ~10000 sample points per model rather than averaged.

Btw, as I understand it, the image below shows your Chamfer distance formulation. It seems to differ slightly from Eq. 10, which does not have the half operator. Am I right? Thank you.

[image: the Chamfer distance formulation used in the NeuralPull code]

https://github.com/mabaorui/NeuralPull/blob/c093a52308a9b74446d24cc6c1b0fee5ee5bb7bb/NeuralPull.py#L131-L139
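For reference, this is my own transcription of the halved formulation I mean (my notation, assuming mean-normalized Euclidean nearest-neighbor distances over point sets $A$ and $B$):

$$
d_{\mathrm{CD}}(A,B) = \frac{1}{2}\left(\frac{1}{|A|}\sum_{a \in A} \min_{b \in B} \lVert a-b\rVert \;+\; \frac{1}{|B|}\sum_{b \in B} \min_{a \in A} \lVert a-b\rVert\right)
$$

If I read Eq. 10 correctly, it is the same two-sided sum without the leading $\frac{1}{2}$, so on identical inputs the halved version reports exactly half the value.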

abc-def-g commented 1 year ago

I also noticed this issue. @mabaorui, you probably used the wrong error metric for the comparison. Comparing different methods with different metrics is meaningless. Please check it seriously. Waiting for your reply!

mabaorui commented 1 year ago

Thank you for your attention; the questions raised are constructive for our work. We prepared the paper without checking the implementation of Points2Surf: since the paper is an official publication, we considered the formula in the paper (Eq. 10 in the Points2Surf paper) to be reliable. Some of the methods we compared use a half operator and some do not; we followed the metric described in each method's paper and reported the results in our paper accordingly. We simply forgot to update the code submitted to GitHub, which was our mistake, and we will correct this part of the code later.