JeffWang987 / MVSTER

[ECCV 2022] MVSTER: Epipolar Transformer for Efficient Multi-View Stereo
MIT License

question about pretrained model #10

Closed hakeemray closed 2 years ago

hakeemray commented 2 years ago

Thanks for your code! When I use your pretrained model with test_mvs4.py, I get errors such as:

```
size mismatch for reg.0.conv0.conv.weight: copying a param with shape torch.Size([8, 8, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 64, 1, 3, 3]).
size mismatch for reg.1.conv0.conv.weight: copying a param with shape torch.Size([8, 8, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 32, 1, 3, 3]).
size mismatch for reg.2.conv0.conv.weight: copying a param with shape torch.Size([8, 4, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 16, 1, 3, 3]).
size mismatch for reg.3.conv0.conv.weight: copying a param with shape torch.Size([8, 4, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 8, 1, 3, 3]).
```

Can you give me some advice on how to edit the code? Thanks!

JeffWang987 commented 2 years ago

The default numbers of depth candidates per stage are [8, 8, 4, 4], which is what our pretrained model was trained with. If you want to use the setting depth candidates = [64, 32, 16, 8], you should train from scratch.
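The mismatch can be detected before loading by comparing parameter shapes between the checkpoint and the freshly built model. A minimal sketch (the plain-tuple dictionaries below are hypothetical stand-ins for `model.state_dict()` and the loaded checkpoint, whose values are torch tensors with a `.shape`; the helper name is made up for illustration):

```python
def find_shape_mismatches(model_shapes, ckpt_shapes):
    """Return parameter names whose checkpoint shape differs from the model's."""
    mismatches = {}
    for name, model_shape in model_shapes.items():
        ckpt_shape = ckpt_shapes.get(name)
        if ckpt_shape is not None and ckpt_shape != model_shape:
            mismatches[name] = (ckpt_shape, model_shape)
    return mismatches

# Shapes taken from the error messages above: the second dimension of each
# reg.*.conv0 weight tracks the per-stage number of depth candidates, so a
# checkpoint trained with ndepths=[8, 8, 4, 4] cannot be loaded into a model
# built with ndepths=[64, 32, 16, 8].
ckpt = {
    "reg.0.conv0.conv.weight": (8, 8, 1, 3, 3),
    "reg.1.conv0.conv.weight": (8, 8, 1, 3, 3),
    "reg.2.conv0.conv.weight": (8, 4, 1, 3, 3),
    "reg.3.conv0.conv.weight": (8, 4, 1, 3, 3),
}
model = {
    "reg.0.conv0.conv.weight": (8, 64, 1, 3, 3),
    "reg.1.conv0.conv.weight": (8, 32, 1, 3, 3),
    "reg.2.conv0.conv.weight": (8, 16, 1, 3, 3),
    "reg.3.conv0.conv.weight": (8, 8, 1, 3, 3),
}

for name, (ckpt_shape, model_shape) in find_shape_mismatches(model, ckpt).items():
    print(f"{name}: checkpoint {ckpt_shape} vs model {model_shape}")
```

In short, either build the model with the same ndepths as the checkpoint, or retrain with the new setting.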

JeffWang987 commented 2 years ago

For your convenience, the training command is:

```
bash ./scripts/train_dtu.sh mid exp_name --ndepths 64 32 16 8
```

For better performance, you should also try fine-tuning 'depth_inter_r' when you change 'ndepths'.