e-bug / volta

[TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs"
https://aclanthology.org/2021.tacl-1.58/
MIT License

Is it possible to release pretrained model weights (without downstream task fine-tuning)? #9

Closed · Mallory24 closed this issue 3 years ago

Mallory24 commented 3 years ago

Hello! Thanks for your great repository. I saw that you only release the already fine-tuned V+L models. Would it be possible to also distribute the pretrained-only weights (without downstream fine-tuning) for the 5 models in your controlled setup?

Thanks in advance!

e-bug commented 3 years ago

Hi, the currently released weights are actually the pretrained ones, with no downstream fine-tuning applied!

The values in the table are the performance you can expect after fine-tuning them. Sorry for the confusion.
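If you want to double-check what a released checkpoint contains, you can inspect its parameter names. Below is a minimal sketch, assuming the file is a plain PyTorch state dict (the path and the "model" wrapper key are hypothetical, not the repo's actual layout): a pretraining checkpoint carries pretraining heads (e.g. masked LM / region prediction) rather than a downstream task head.

```python
import torch

# Load the checkpoint on CPU for inspection; the path is hypothetical.
ckpt = torch.load("checkpoints/vilbert_ctrl_pretrained.bin", map_location="cpu")

# Some checkpoints wrap the weights (e.g. {"model": state_dict}); unwrap if so.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Print each parameter name and shape; key names indicate whether the
# weights come from pretraining heads or a task-specific head.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```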

Mallory24 commented 3 years ago

Hi, Thanks for the clarification!