megvii-research / KD-MVS

Code for ECCV2022 paper 'KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo'
MIT License

Too many parameters #5

Open Mizodesu opened 1 year ago

Mizodesu commented 1 year ago

I used your pretrained model, which is about 18 MB, but the checkpoint produced by my own training run is more than 1 GB.