e-bug / volta

[TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs"
https://aclanthology.org/2021.tacl-1.58/
MIT License

Finetuning models on SNLI-VE #7

Closed jaweriah closed 3 years ago

jaweriah commented 3 years ago

Hi,

To finetune the pretrained (ctrl) models on the SNLI-VE dataset, do I just need to run the training script in the examples? Is my understanding correct, or are there other changes that need to be made?

Thanks!

e-bug commented 3 years ago

Hi,

Yes, you can look at the example for ViLBERT.

You can, of course, specify a different architecture and different hyperparameters (see also the configurations in config_tasks).
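For reference, a fine-tuning invocation would look roughly like the sketch below. This is illustrative only: the script name, flags, paths, and task index are assumptions, so check the ViLBERT example under examples/ and the YAML files in config_tasks/ for the exact command and values.

```bash
# Illustrative sketch only: the flags, paths, and task index below are
# assumptions, not the repo's documented interface. See the ViLBERT example
# in examples/ and the files in config_tasks/ for the actual command.
python train_task.py \
  --config_file config/ctrl_vilbert_base.json \
  --from_pretrained checkpoints/ctrl_vilbert.bin \
  --tasks_config_file config_tasks/vilbert_tasks.yml \
  --task 12 \
  --output_dir checkpoints/snli-ve
```

Swapping in another architecture should then just be a matter of pointing --config_file and --from_pretrained at the corresponding model config and checkpoint.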

jaweriah commented 3 years ago

Yes, I am following the example of ViLBERT and using that script for other models.

Thanks for your response :)