EPFL-VILAB / MultiMAE

MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022
https://multimae.epfl.ch

Ask for pre-trained model #10

Closed · YananChen2021 closed this issue 2 years ago

YananChen2021 commented 2 years ago

Hi! I don't have enough GPU resources to train the model. Where can I download your pre-trained model? I am looking forward to your reply.

roman-bachmann commented 2 years ago

Hi @YananChen2021,

You can download our pre-trained models from here: https://github.com/EPFL-VILAB/MultiMAE#pre-trained-models. See the download links inside the first table.
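As a minimal sketch, loading a downloaded checkpoint with PyTorch could look like the following. The URL below is a placeholder, not an actual release link; substitute the real download link from the table, and note that the key structure of the checkpoint dict is an assumption:

```python
import torch

# Placeholder URL -- replace with an actual link from the
# "Pre-trained models" table in the README.
ckpt_url = "https://github.com/EPFL-VILAB/MultiMAE/releases/download/<tag>/<checkpoint>.pth"

# Download (cached under ~/.cache/torch/hub/checkpoints) and load the weights.
state_dict = torch.hub.load_state_dict_from_url(ckpt_url, map_location="cpu")

# Checkpoints are sometimes wrapped in a dict; unwrap the 'model' key if present.
if isinstance(state_dict, dict) and "model" in state_dict:
    state_dict = state_dict["model"]

# After instantiating the MultiMAE model as described in the README:
# model.load_state_dict(state_dict)
```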

Best, Roman