e-bug / volta

[TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs"
https://aclanthology.org/2021.tacl-1.58/
MIT License

Is it possible to share the F1 score or accuracy of pre-training the models in their control setting on the image-sentence alignment objective? #21

Closed · kaiweicen closed this issue 2 years ago

kaiweicen commented 2 years ago

Hi, thanks for the great repository! Is it possible to post the F1 score or accuracy of pre-training the models in their control setting on the image-sentence alignment objective? I don't see this information in the VOLTA paper.

e-bug commented 2 years ago

Hi,

Can you clarify what your goal is?

Also, you can run any analysis yourself on all the models pretrained in the control setup (cf. here).
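
For example, a generic evaluation loop along the following lines would give you accuracy and F1 for the alignment head. This is just a minimal sketch: `model` is assumed to be any pretrained V&L model whose image-sentence alignment (ITM) head returns a (mismatched, matched) logit pair per example, and `dataloader` is assumed to yield batches with binary labels; neither name refers to actual classes in this repo.

```python
import torch
from sklearn.metrics import accuracy_score, f1_score

# Placeholder names: `model` is any pretrained V&L model with an
# image-sentence alignment (ITM) head producing one logit pair per
# example, and `dataloader` yields batches with binary labels
# (1 = matched image-caption pair, 0 = mismatched).
@torch.no_grad()
def evaluate_alignment(model, dataloader, device="cuda"):
    model.eval().to(device)
    all_preds, all_labels = [], []
    for batch in dataloader:
        images = batch["image"].to(device)
        captions = batch["caption"].to(device)
        labels = batch["label"]              # shape: (batch_size,)
        logits = model(images, captions)     # shape: (batch_size, 2)
        preds = logits.argmax(dim=-1).cpu()  # index 1 = predicted "matched"
        all_preds.extend(preds.tolist())
        all_labels.extend(labels.tolist())
    return {
        "accuracy": accuracy_score(all_labels, all_preds),
        "f1": f1_score(all_labels, all_preds),
    }
```

With the checkpoints pretrained in the control setup, you could run a loop like this over a held-out set of matched/mismatched image-caption pairs to obtain the numbers you are after.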