Totoro97 / NeuS

Code release for NeuS

Unable to reproduce DTU results #94

Open · ishanic opened this issue 1 year ago

ishanic commented 1 year ago

Hello Peng,

Thanks very much for sharing this excellent work. I have a question about reproducing the DTU results. I compared the mesh-to-ground-truth distance of your pretrained model against that of a model I trained with this codebase, and the numbers are significantly different. In particular, I am looking at scan 24.

To extract the mesh and compute the metrics, I took the latest checkpoint from Data/pretrained_DTU.zip and ran:

```
python exp_runner.py --mode validate_mesh --conf ./confs/wmask.conf --case dtu_scan24 --is_continue
```

Next, I cleaned the mesh with the clean_mesh.py script:

```
python clean_mesh.py
```

and then computed the mesh distance using the evaluation script shared at https://github.com/jzhangbs/DTUeval-python:

```
python eval.py --data meshes_clean/024.ply --scan 24 --mode mesh --dataset_dir SampleSet_Points/ --vis_out_dir visualize/
```
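For reference, here is the same three-step sequence written as a single script. This is only a sketch under the assumptions of my setup: exp_runner.py and clean_mesh.py come from this repo, eval.py comes from DTUeval-python, and all three are assumed reachable from the working directory.

```python
# Sketch: chain mesh extraction, cleaning, and DTU evaluation for one scan.
# Paths and script locations are assumptions from my local layout.
import subprocess

scan = 24

# 1. Extract the mesh from the latest checkpoint (NeuS repo).
subprocess.run(
    ["python", "exp_runner.py", "--mode", "validate_mesh",
     "--conf", "./confs/wmask.conf", "--case", f"dtu_scan{scan}",
     "--is_continue"],
    check=True,
)

# 2. Clean the extracted mesh (NeuS repo).
subprocess.run(["python", "clean_mesh.py"], check=True)

# 3. Score the cleaned mesh against the DTU ground-truth points
#    (eval.py from DTUeval-python).
subprocess.run(
    ["python", "eval.py",
     "--data", f"meshes_clean/{scan:03d}.ply",
     "--scan", str(scan), "--mode", "mesh",
     "--dataset_dir", "SampleSet_Points/",
     "--vis_out_dir", "visualize/"],
    check=True,
)
```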

I followed the same sequence for my own model, except that instead of your checkpoint I used a model trained from scratch on the data shared in data_DTU.zip:

```
python exp_runner.py --mode train --conf ./confs/wmask.conf --case dtu_scan24
```

Unfortunately, the resulting Chamfer distances are very different.

The results from your pretrained checkpoint are:

mean_d2s=0.90, mean_s2d=0.75, over_all=0.82

while retraining with the same confs gives much worse results:

mean_d2s=1.03, mean_s2d=0.87, over_all=0.95
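In case it clarifies what these numbers measure: below is a minimal sketch of the two directional Chamfer terms, assuming point clouds sampled from the reconstructed mesh and from the ground-truth scan. It is not the DTUeval-python implementation itself, which also filters points by the observation mask and downsamples; the max_dist threshold here is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_terms(pred_pts: np.ndarray, gt_pts: np.ndarray,
                  max_dist: float = 20.0):
    """pred_pts, gt_pts: (N, 3) point clouds in millimetres (DTU convention).

    Returns (mean_d2s, mean_s2d, over_all), matching the metric names above.
    """
    # data-to-scan: for each reconstructed point, distance to nearest GT point
    d2s, _ = cKDTree(gt_pts).query(pred_pts, k=1)
    # scan-to-data: for each GT point, distance to nearest reconstructed point
    s2d, _ = cKDTree(pred_pts).query(gt_pts, k=1)
    # Clip outlier distances before averaging (assumed threshold, in the
    # spirit of DTU-style evaluations).
    mean_d2s = np.clip(d2s, None, max_dist).mean()
    mean_s2d = np.clip(s2d, None, max_dist).mean()
    return mean_d2s, mean_s2d, (mean_d2s + mean_s2d) / 2.0
```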

I am also sharing the visualized diff maps in case they help. The first image is the rendered mesh from your model; the second is from the model I trained with the same confs as the codebase. Is there a way to debug this issue? How can I reproduce your models as closely as possible?

Thanks!

[diff-map images: "rendered" (pretrained model), "trained" (my model)]
Terry10086 commented 1 year ago

I tested scan 65, and my result was also higher (about 0.71, 0.51, avg. 0.61) than it should be (0.59).