megvii-research / KD-MVS

Code for ECCV2022 paper 'KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo'
MIT License

About the performance of the teacher model (i.e., Self-supervised Teacher Training) #1

Closed jzhu-tju closed 2 years ago

jzhu-tju commented 2 years ago

Amazing work! After reading your paper, I have a small question. Table 5 shows the performance of your self-supervised teacher training when using MVSNet as the backbone. Could you tell me the performance on DTU with CasMVSNet when only the self-supervised teacher training is applied? Thanks, looking forward to your kind reply!

DingYikang commented 2 years ago

Hi, thank you for your interest in our work. Actually, the reported results in Tab.5 are obtained by applying self-supervised teacher training on CasMVSNet (not on MVSNet). Hope this can help you.

jzhu-tju commented 2 years ago

Thanks for your reply! As reported in U-MVS (ICCV 2021), the performance when applying only the photometric loss to CasMVSNet is 0.4041, and when I reproduce it under the same setting, the performance is even higher (about 0.39). In your paper, the performance obtained by applying only the photometric loss to CasMVSNet is 0.495, which differs considerably from the above.

Is there any difference in experimental settings? Looking forward to your reply. Many thanks!
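For readers unfamiliar with the loss being discussed: the photometric loss in self-supervised MVS compares the reference image against source images warped into the reference view via the predicted depth, masking out pixels whose warp falls outside the source frame. A minimal numpy sketch of that masked comparison, assuming the warping itself has already been done (the names `photometric_loss`, `warped_srcs`, and `valid_mask` are illustrative, not from the KD-MVS or U-MVS code):

```python
import numpy as np

def photometric_loss(ref_img, warped_srcs, valid_mask):
    """Masked mean absolute error between the reference image and
    each source image warped into the reference view.

    ref_img:     (H, W, 3) reference image
    warped_srcs: (V, H, W, 3) source images warped via predicted depth
    valid_mask:  (V, H, W) 1.0 where the warp landed inside the source image
    """
    losses = []
    for src, mask in zip(warped_srcs, valid_mask):
        diff = np.abs(ref_img - src).mean(axis=-1)  # (H, W) per-pixel error
        # average only over valid pixels; guard against an empty mask
        losses.append((diff * mask).sum() / max(mask.sum(), 1.0))
    return float(np.mean(losses))

# toy check: perfectly warped (identical) sources give zero loss
ref = np.random.rand(8, 8, 3)
warped = np.stack([ref, ref])        # two "source" views
mask = np.ones((2, 8, 8))
print(photometric_loss(ref, warped, mask))  # 0.0
```

In practice this term is usually combined with an SSIM term and a smoothness regularizer, which is one more place where two implementations of "only the photometric loss" can quietly diverge.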

DingYikang commented 2 years ago

Hi, thank you for your feedback. In our experiments, we didn't follow the settings of U-MVS, so the results may look different. Several factors can affect the final numbers (e.g., the fusion method, the fusion thresholds, etc.). In the ablation experiments in Tab. 5, we kept the settings consistent so that the comparison fairly illustrates the effectiveness of the different featuremetric losses. Hope this can help you.
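To see why the fusion threshold alone can move the DTU score: depth-map fusion typically keeps a pixel only if enough neighboring views agree with it geometrically, so tightening the thresholds discards points (often improving accuracy while hurting completeness, and the DTU overall score averages both). A hypothetical numpy sketch of that filtering step (the function `fuse_count` and the threshold values are illustrative, not the repo's fusion code):

```python
import numpy as np

def fuse_count(reproj_err, depth_rel_err, px_thresh, depth_thresh, min_views):
    """Count points that survive geometric-consistency filtering.

    reproj_err:    (V, N) reprojection error (pixels) of N points in V src views
    depth_rel_err: (V, N) relative depth difference against each src view
    A point is kept if at least `min_views` views agree within both thresholds.
    """
    consistent = (reproj_err < px_thresh) & (depth_rel_err < depth_thresh)
    return int((consistent.sum(axis=0) >= min_views).sum())

rng = np.random.default_rng(0)
reproj = rng.uniform(0.0, 2.0, size=(4, 1000))    # pixel errors
depth = rng.uniform(0.0, 0.02, size=(4, 1000))    # relative depth differences

loose = fuse_count(reproj, depth, 1.0, 0.01, min_views=2)
strict = fuse_count(reproj, depth, 0.5, 0.005, min_views=3)
print(loose, strict)  # stricter settings keep fewer points
```

Two codebases reporting "the same" photometric-loss model can therefore land on visibly different DTU overall scores purely through this post-processing stage, which is consistent with the maintainer's point above.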

jzhu-tju commented 2 years ago

Sincere thanks!