Open fujitomi opened 3 years ago
Hi, the depth map fusion and refinement steps are quite important for T&T benchmarking.
I would suggest implementing the depth map fusion step exactly as described in the MVSNet paper (this part is not released, as it depends on an Altizure internal library), or you could try some other alternatives (MVSNet_pytorch, Vis-MVSNet, D2HC-MVSNet).
The depth map refinement step is also important but not easy to implement. However, judging from recent R-MVSNet-like methods (D2HC-MVSNet), you can still achieve high-quality results without this part.
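For anyone else implementing the fusion step from the paper: below is a rough NumPy sketch of my understanding of the photometric + geometric filtering (probability thresholding, plus reprojecting each reference depth into neighbor views and back, keeping pixels consistent in enough views). The thresholds and the nearest-neighbor depth lookup are assumptions, not the exact released code.

```python
import numpy as np

def reproject(depth_ref, K_ref, K_src, R, t, depth_src):
    """For each reference pixel: project into the source view with depth_ref,
    look up the source depth there, back-project into the reference view.
    Returns reprojection error (px) and the depth implied by the source view.
    (R, t) maps reference-camera coordinates to source-camera coordinates."""
    h, w = depth_ref.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], 0).reshape(3, -1).astype(np.float64)

    # reference pixel -> 3D point in the source camera frame
    pts = np.linalg.inv(K_ref) @ pix * depth_ref.reshape(1, -1)
    pts_src = R @ pts + t.reshape(3, 1)
    proj = K_src @ pts_src
    u = (proj[0] / proj[2]).reshape(h, w)
    v = (proj[1] / proj[2]).reshape(h, w)

    # nearest-neighbor lookup of the source depth map (bilinear would be finer)
    ui = np.clip(np.round(u).astype(int), 0, w - 1)
    vi = np.clip(np.round(v).astype(int), 0, h - 1)
    d_src = depth_src[vi, ui]

    # source pixel -> back into the reference view
    pix_src = np.stack([u.ravel(), v.ravel(), np.ones(h * w)], 0)
    pts_back = np.linalg.inv(K_src) @ pix_src * d_src.reshape(1, -1)
    pts_ref = R.T @ (pts_back - t.reshape(3, 1))
    proj_ref = K_ref @ pts_ref
    u_back = (proj_ref[0] / proj_ref[2]).reshape(h, w)
    v_back = (proj_ref[1] / proj_ref[2]).reshape(h, w)
    d_back = proj_ref[2].reshape(h, w)

    err = np.sqrt((u_back - xs) ** 2 + (v_back - ys) ** 2)
    return err, d_back

def consistency_mask(depth_ref, prob_ref, neighbors, prob_thresh=0.3,
                     pix_thresh=1.0, depth_thresh=0.01, min_views=3):
    """Photometric filter (probability threshold) AND geometric filter
    (reprojection error < pix_thresh px, relative depth difference
    < depth_thresh, consistent in at least min_views neighbor views)."""
    photo = prob_ref > prob_thresh
    count = np.zeros(depth_ref.shape, dtype=int)
    for (K_ref, K_src, R, t, depth_src) in neighbors:
        err, d_back = reproject(depth_ref, K_ref, K_src, R, t, depth_src)
        rel = np.abs(d_back - depth_ref) / np.maximum(depth_ref, 1e-9)
        count += (err < pix_thresh) & (rel < depth_thresh)
    return photo & (count >= min_views)
```

Pixels passing the mask are back-projected to 3D and merged into the final point cloud; fusibile additionally averages the consistent depth estimates, which this sketch omits.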
@YoYo000 Hello. I am currently trying to reproduce the T&T benchmark results with R-MVSNet, but I am still getting considerably worse results than those reported in your paper: the produced intermediate F-score is approximately 30. I understand that a complete reproduction needs the refinement and fusion steps.
Details are below.
Network (without refinement)
- input image size: 1920x1080
- feature extraction: UNetDS2GN
- sampling: inverse depth
Training
- provided DTU datasets
- max_d: 128
- views: 3
- depth sampled from 425 to 937
Inference
- using provided DEPTH_MIN, DEPTH_MAX and DEPTH_NUM
- views: 5
- using the fusibile fork you modified
- prob_threshold: 0.3 (also tried 0.1 and 0.2)
- disp_threshold: 0.25
- num_consistent: 4
Also, I found that fixing DEPTH_NUM to 256 while keeping the provided DEPTH_MIN and DEPTH_MAX unchanged produced a much closer F-score.
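For clarity, this is how I generate the inverse-depth hypotheses from DEPTH_MIN, DEPTH_MAX, and DEPTH_NUM (a sketch of my setup, not necessarily identical to the repo's code):

```python
import numpy as np

def inverse_depth_samples(depth_min, depth_max, num_d):
    # Hypotheses spaced uniformly in inverse depth (1/d), so samples are
    # denser near the camera, matching the "sampling: inverse depth" setting.
    inv = np.linspace(1.0 / depth_min, 1.0 / depth_max, num_d)
    return 1.0 / inv

# e.g. the DTU training range above, with the fixed DEPTH_NUM of 256
samples = inverse_depth_samples(425.0, 937.0, 256)
```

Note that with inverse-depth sampling the depth step is not constant, which is presumably why fixing DEPTH_NUM changes how disp_threshold behaves in fusion.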
If you have time, I'd really appreciate it if you could indicate the differences between my experiments and yours.