prstrive / UniMVSNet

[CVPR 2022] Rethinking Depth Estimation for Multi-View Stereo: A Unified Representation
MIT License

The evaluation results on DTU evaluation set are different between paper and released checkpoints #21

Open YANG-SOBER opened 1 year ago

YANG-SOBER commented 1 year ago

Dear Rui Peng:

Thank you very much for your contribution and nice work.

I evaluated your released checkpoint "unimvsnet_dtu.ckpt" on the DTU evaluation set (without changing any parameters).

The results are: 0.4173 for mean accuracy and 0.2966 for mean completeness.

However, in the paper, the two metrics are 0.352 and 0.278, respectively.

May I know whether this released checkpoint is the one you used for the paper?

Thanks for your help.

Looking forward to your response.

YANG-SOBER commented 1 year ago

After setting align_corners=True in F.grid_sample(), the result is 0.3685 (acc) and 0.2785 (comp). The remaining difference can be attributed to gipuma (fusibile), whose compilation depends on the compute capability of your particular GPU and CUDA version; please refer to https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/.
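For context, align_corners controls how F.grid_sample maps normalized grid coordinates in [-1, 1] to pixel positions, which shifts every sampled depth/feature location slightly. A minimal pure-Python sketch of the two mappings (the helper name is illustrative, not from the UniMVSNet code):

```python
def unnormalize(coord, size, align_corners):
    """Map a normalized grid coordinate in [-1, 1] to a pixel index,
    mirroring how torch.nn.functional.grid_sample interprets the grid."""
    if align_corners:
        # -1 and +1 refer to the centers of the corner pixels: range [0, size-1]
        return (coord + 1) / 2 * (size - 1)
    else:
        # -1 and +1 refer to the outer edges of the corner pixels: range [-0.5, size-0.5]
        return ((coord + 1) * size - 1) / 2

# With width 4, the same normalized coordinate lands on different pixels:
print(unnormalize(1.0, 4, align_corners=True))   # 3.0 (center of the last pixel)
print(unnormalize(1.0, 4, align_corners=False))  # 3.5 (edge of the last pixel)
```

The sub-half-pixel offset between the two conventions is enough to perturb the warped features across views, which is consistent with the accuracy gap narrowing from 0.4173 to 0.3685 once the flag matches the training setup.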

prstrive commented 1 year ago

The test results are indeed related to the environment; maybe you can try PyTorch 1.2.