yhw-yhw / D2HC-RMVSNet

The official repository of the paper "Dense Hybrid Recurrent Multi-view Stereo Net with Dynamic Consistency Checking" (ECCV 2020 Spotlight)
MIT License
109 stars · 9 forks

Dynamic consistency fusion #3

Closed kwea123 closed 3 years ago

kwea123 commented 3 years ago

From Table 3 it seems dynamic consistency fusion increases the score by 1.65, which is a lot, so I want to test it on other methods to see whether it consistently improves any kind of depth prediction result. Have you tried applying this module to other methods such as MVSNet? It should be just a drop-in replacement.

In your code, however, I cannot find what is described in the paper. There are two fusion functions: one in eval.py https://github.com/yhw-yhw/D2HC-RMVSNet/blob/7ebeb16bfd1f013aeaa5c72ac89a0605f8182309/eval.py#L268-L283 which is the traditional way to fuse, and the other in fusion.py https://github.com/yhw-yhw/D2HC-RMVSNet/blob/7ebeb16bfd1f013aeaa5c72ac89a0605f8182309/fusion.py#L192-L214 which is different and which I don't understand.
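For context, the "traditional" two-view geometric check that most MVSNet-style fusion code implements looks roughly like the sketch below (plain NumPy, nearest-neighbour sampling; the function names, the same-resolution assumption and the 1 px / 1% thresholds are illustrative defaults, not taken from eval.py):

```python
import numpy as np

def reproject_depth(depth_ref, K_ref, E_ref, depth_src, K_src, E_src):
    """Warp reference pixels into the source view via depth_ref, read the source
    depth there (nearest neighbour), and project that 3D point back into the
    reference view. K_* are 3x3 intrinsics, E_* are 4x4 world-to-camera extrinsics.
    Assumes both depth maps have the same resolution (illustrative simplification)."""
    h, w = depth_ref.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xx, yy, np.ones_like(xx)], 0).reshape(3, -1).astype(np.float64)

    # lift reference pixels to 3D and move them into the source camera frame
    pts_ref_cam = np.linalg.inv(K_ref) @ (pix * depth_ref.reshape(1, -1))
    pts_ref_h = np.vstack([pts_ref_cam, np.ones((1, pts_ref_cam.shape[1]))])
    pts_src_cam = (E_src @ np.linalg.inv(E_ref) @ pts_ref_h)[:3]

    # project into the source image and sample its depth map (nearest neighbour)
    uv_src = K_src @ pts_src_cam
    uv_src = uv_src[:2] / np.clip(uv_src[2:], 1e-8, None)
    us = np.clip(np.round(uv_src[0]).astype(int), 0, w - 1)
    vs = np.clip(np.round(uv_src[1]).astype(int), 0, h - 1)
    d_src = depth_src[vs, us]

    # back-project the sampled source depth into the reference view
    pix_src_h = np.vstack([uv_src, np.ones((1, uv_src.shape[1]))])
    pts_src_cam2 = np.linalg.inv(K_src) @ (pix_src_h * d_src)
    pts_src_h2 = np.vstack([pts_src_cam2, np.ones((1, pts_src_cam2.shape[1]))])
    pts_back = (E_ref @ np.linalg.inv(E_src) @ pts_src_h2)[:3]
    depth_reproj = pts_back[2].reshape(h, w)
    uv_back = K_ref @ pts_back
    uv_back = uv_back[:2] / np.clip(uv_back[2:], 1e-8, None)
    return uv_back[0].reshape(h, w), uv_back[1].reshape(h, w), depth_reproj

def traditional_consistency_mask(depth_ref, K_ref, E_ref, depth_src, K_src, E_src,
                                 pix_thresh=1.0, rel_depth_thresh=0.01):
    """A reference pixel is consistent with this source view if its reprojection
    error is below pix_thresh pixels AND its relative depth difference is below
    rel_depth_thresh. Fusion then keeps pixels that pass in enough source views."""
    h, w = depth_ref.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x_back, y_back, depth_reproj = reproject_depth(depth_ref, K_ref, E_ref,
                                                   depth_src, K_src, E_src)
    dist = np.sqrt((x_back - xx) ** 2 + (y_back - yy) ** 2)
    rel_diff = np.abs(depth_reproj - depth_ref) / np.clip(depth_ref, 1e-8, None)
    return np.logical_and(dist < pix_thresh, rel_diff < rel_depth_thresh)
```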

In any case, neither of them does what is described in the paper, so I wonder which implementation you used to get the reported results, and how it differs from the paper.

weizizhuang commented 3 years ago

You can try the fusion method in fusion.py to generate the point cloud; it works better than the traditional way. In fusion.py we follow the idea of dynamic consistency checking (an estimated depth value is considered accurate and reliable when it has a very low reprojection error in a few views, or a moderately low error across the majority of views), but implement it in a more direct way. This implementation, with tuned parameters, performs a little better on Tanks and Temples.
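As a reading of the idea described above (not the exact code in fusion.py), the acceptance rule can be sketched as: keep a pixel if it passes a strict error threshold in at least a few views, OR a looser threshold in a majority of views. The (threshold, min_views) rules below are illustrative assumptions, not the tuned parameters used for Tanks and Temples:

```python
import numpy as np

def dynamic_consistency_mask(reproj_err, rel_depth_diff,
                             rules=((1.0, 0.01, 2),    # very low error in >= 2 views
                                    (2.0, 0.02, 4))):  # looser error in >= 4 views
    """reproj_err and rel_depth_diff are (V, H, W) arrays, one slice per source view.
    Each rule is (pixel_err_thresh, rel_depth_thresh, min_views); a pixel is kept
    if it satisfies any single rule."""
    keep = np.zeros(reproj_err.shape[1:], dtype=bool)
    for pix_t, depth_t, min_views in rules:
        ok = np.logical_and(reproj_err < pix_t, rel_depth_diff < depth_t)  # (V, H, W)
        keep |= ok.sum(axis=0) >= min_views
    return keep
```

The per-view reprojection error and relative depth difference can be computed with the same two-view reprojection used in the traditional check above; under this reading, only the acceptance rule changes.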