xxlong0 / SparseNeuS

SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse views
MIT License
325 stars 16 forks

DTU Quantitative results #18

Closed jerryxu9905 closed 2 years ago

jerryxu9905 commented 2 years ago

Hi, I recently ran the training code in generic mode and extracted the meshes, then used the script https://github.com/jzhangbs/DTUeval-python to evaluate the Chamfer distance for the 15 test scenes according to Table 1 of your paper. However, the quantitative results I got are worse than those in Table 1, e.g.:

I use --mode val on the command line to extract the mesh, and in the conf file I set test_ref_view = [23]. I couldn't find the ref-view settings for this experiment in your paper, so maybe you used a different pairs.txt for the Chamfer distance test? Could you share some details of your evaluation setup?

Thanks a lot.
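For reference, the metric under discussion is the symmetric Chamfer distance between the reconstructed and ground-truth point clouds. A minimal NumPy sketch (brute-force nearest neighbours, averaging the two directional terms; DTUeval-python additionally downsamples and thresholds, so its numbers will differ) could look like this:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3).

    Computes the mean nearest-neighbour distance in both directions
    (accuracy and completeness) and averages them. Brute force, so only
    suitable for small clouds.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    acc = d.min(axis=1).mean()    # p -> q (accuracy)
    comp = d.min(axis=0).mean()   # q -> p (completeness)
    return (acc + comp) / 2.0

pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # identical clouds -> 0.0
```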

flamehaze1115 commented 2 years ago

Hello. We use two image pairs per scene for the evaluation. Before calculating the Chamfer distance for all the neural-based methods, we use the input image masks to clean the reconstruction results, because all the neural-based methods produce many free surfaces in the background regions. Directly calculating the Chamfer distance against the ground-truth point cloud and comparing it with the SOTA methods doesn't make much sense.

The evaluation input image pairs we used can be downloaded here: https://connecthkuhk-my.sharepoint.com/:u:/g/personal/xxlong_connect_hku_hk/EU22HEv48nRLnnnliRvJNA0BILozsMLbhsnMQh1WZLY5kg?e=Lh7kWM

I will upload the full evaluation code this week.
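The mask-based cleaning step described above can be sketched roughly as follows. This is a hypothetical minimal version, assuming pinhole cameras and per-view boolean foreground masks; the actual evaluation code may differ in projection conventions and in how views are combined:

```python
import numpy as np

def clean_with_masks(points, cameras, masks):
    """Keep only points that project inside the foreground mask of every view.

    points  : (N, 3) world-space vertices of the reconstructed mesh
    cameras : list of (K, R, t) pinhole cameras with x_cam = R @ X + t
    masks   : list of (H, W) boolean foreground masks, one per camera
    """
    keep = np.ones(len(points), dtype=bool)
    for (K, R, t), mask in zip(cameras, masks):
        cam = points @ R.T + t                # world -> camera coordinates
        uv = cam @ K.T                        # camera -> homogeneous pixels
        uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (cam[:, 2] > 0)
        valid = np.zeros(len(points), dtype=bool)
        valid[inside] = mask[v[inside], u[inside]]
        keep &= valid
    return points[keep]

# Toy demo (hypothetical camera): identity pose, principal point at (1, 1),
# 3x3 mask whose centre pixel is the only foreground pixel.
K = np.array([[1.0, 0, 1], [0, 1.0, 1], [0, 0, 1.0]])
R, t = np.eye(3), np.zeros(3)
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
pts = np.array([[0.0, 0, 1], [1.0, 0, 1]])  # centre point and an off-mask point
kept = clean_with_masks(pts, [(K, R, t)], [mask])
print(kept)  # only the centre point survives
```

After cleaning, the surviving points are what get compared against the ground-truth cloud with the Chamfer distance.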

jerryxu9905 commented 2 years ago


Hi @flamehaze1115, thanks for your reply.

luoxiaoxuan commented 2 years ago

@jerryxu9905 Hi, I'm running into the same problem. Did you solve it? The results of my run are also bad; the details can be found in this. Can you offer some suggestions?