megvii-research / TransMVSNet

(CVPR 2022) TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers.

Trained on DTU, but DTU evaluation results don't match the paper metrics. #38

Open AsDeadAsADodo opened 8 months ago

AsDeadAsADodo commented 8 months ago
|            | Acc.  | Comp. | Overall |
|------------|-------|-------|---------|
| paper      | 0.321 | 0.289 | 0.305   |
| reproduced | 0.364 | 0.275 | 0.3195  |

Trained only on the DTU dataset with batch size set to 1 and no fine-tuning.

wtyuan96 commented 8 months ago

Maybe you can select the best model according to the results on the validation set.
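
For reference, a minimal sketch of what that could look like, assuming per-epoch validation depth errors have already been collected (the paths and the metric here are illustrative assumptions, not the repo's actual conventions):

```python
# Hypothetical sketch: pick the checkpoint with the lowest validation depth
# error instead of just taking the last epoch. Paths and the metric are
# illustrative assumptions, not the repo's actual conventions.
import torch

def select_best_checkpoint(val_errors):
    """val_errors: dict mapping checkpoint path -> mean abs depth error on the val split."""
    best_path = min(val_errors, key=val_errors.get)
    return best_path, torch.load(best_path, map_location="cpu")

# e.g. val_errors = {"checkpoints/model_000014.ckpt": 3.21,
#                    "checkpoints/model_000015.ckpt": 3.05}
```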

AsDeadAsADodo commented 8 months ago

> Maybe you can select the best model according to the results on the validation set.

Thanks for replying. It turns out this issue was raised before in https://github.com/megvii-research/TransMVSNet/issues/13. The results I showed above are based on the gipuma fusion method; I'll try normal fusion.
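
For context, the two options differ only in the post-processing step: gipuma calls the external fusibile fuser, while normal fusion is a Python filter-and-fuse that keeps a pixel only if its depth reprojects consistently into enough neighboring views. Below is a rough sketch of that geometric consistency test, with illustrative function names and thresholds rather than the repo's exact implementation:

```python
# Rough sketch of an MVSNet-style geometric consistency test, assuming 3x3
# intrinsics K and 4x4 world-to-camera extrinsics E per view, and that points
# project in front of both cameras. Names and thresholds are illustrative.
import numpy as np

def check_geometric_consistency(depth_ref, K_ref, E_ref,
                                depth_src, K_src, E_src,
                                pix_thresh=1.0, depth_thresh=0.01):
    h, w = depth_ref.shape
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ones = np.ones_like(x, dtype=np.float64)

    # 1) lift reference pixels to world space using the reference depth
    pix_ref = np.stack([x, y, ones]).reshape(3, -1)
    cam_ref = np.linalg.inv(K_ref) @ pix_ref * depth_ref.reshape(1, -1)
    world = np.linalg.inv(E_ref) @ np.vstack([cam_ref, ones.reshape(1, -1)])

    # 2) project into the source view and sample its depth (nearest neighbour)
    cam_src = (E_src @ world)[:3]
    pix_src = K_src @ cam_src
    u = np.clip(np.round(pix_src[0] / pix_src[2]).astype(int), 0, w - 1)
    v = np.clip(np.round(pix_src[1] / pix_src[2]).astype(int), 0, h - 1)
    d_src = depth_src[v, u]

    # 3) reproject the sampled source depth back into the reference view
    cam_src2 = np.linalg.inv(K_src) @ np.stack([u, v, np.ones_like(u)]) * d_src
    world2 = np.linalg.inv(E_src) @ np.vstack([cam_src2, np.ones((1, cam_src2.shape[1]))])
    cam_ref2 = (E_ref @ world2)[:3]
    pix_ref2 = K_ref @ cam_ref2
    u2, v2 = pix_ref2[0] / pix_ref2[2], pix_ref2[1] / pix_ref2[2]
    d_reproj = cam_ref2[2]

    # 4) keep pixels whose reprojection lands close by and whose depth agrees
    pix_err = np.sqrt((u2 - x.ravel()) ** 2 + (v2 - y.ravel()) ** 2)
    depth_err = np.abs(d_reproj - depth_ref.ravel()) / np.maximum(depth_ref.ravel(), 1e-8)
    mask = (pix_err < pix_thresh) & (depth_err < depth_thresh)
    return mask.reshape(h, w)
```

In the full fusion pipeline a pixel also has to pass a photometric confidence threshold and be consistent in a minimum number of source views before its 3D point is added to the .ply file, which is why the two fusion methods produce point clouds of different size and different Acc./Comp. trade-offs.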

AsDeadAsADodo commented 8 months ago

Using the best model across all epochs and switching to normal fusion, I got the results below.

The total size of the .ply files goes from 6.4 GB to 7.4 GB.

|                   | Acc.   | Comp.  | Overall |
|-------------------|--------|--------|---------|
| paper             | 0.321  | 0.289  | 0.305   |
| reproduced Gipuma | 0.364  | 0.275  | 0.3195  |
| reproduced Normal | 0.3474 | 0.3206 | 0.334   |
| pretrained Gipuma | 0.3462 | 0.2621 | 0.304   |
| pretrained Normal | 0.328  | 0.3013 | 0.31465 |

Edit: added pretrained results.
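
Note that the DTU Overall score is simply the mean of Acc. and Comp., so the numbers above can be sanity-checked directly:

```python
# Quick sanity check: DTU "Overall" is the mean of Accuracy and Completeness.
results = {
    "paper":             (0.321,  0.289),
    "reproduced Gipuma": (0.364,  0.275),
    "reproduced Normal": (0.3474, 0.3206),
    "pretrained Gipuma": (0.3462, 0.2621),
    "pretrained Normal": (0.328,  0.3013),
}
for name, (acc, comp) in results.items():
    print(f"{name}: overall = {(acc + comp) / 2:.5f}")
```

For example, (0.364 + 0.275) / 2 = 0.3195, which matches the reproduced Gipuma row.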

Gwencong commented 7 months ago


I have encountered the same problem. Have you successfully reproduced the paper results? My reproduced results are the same as yours; the overall is 0.319.

AsDeadAsADodo commented 7 months ago


No, so far I can only presume it's the hardware's fault. I've confirmed the environment with Yikang Ding. What GPU did you train on?
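
If hardware differences are suspected, it may also be worth ruling out run-to-run nondeterminism first. A generic PyTorch seeding/determinism sketch (not taken from the TransMVSNet code):

```python
# Hedged sketch: generic PyTorch reproducibility settings, to help separate
# genuine hardware differences from run-to-run nondeterminism. Not part of
# the TransMVSNet codebase.
import random
import numpy as np
import torch

def set_deterministic(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN: trade some speed for reproducible convolution kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```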

Gwencong commented 7 months ago


I am using an RTX 3090 for training.