jayleicn / singularity

[ACL 2023] Official PyTorch code for Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning"
https://arxiv.org/abs/2206.03428
MIT License

High MLM probability during pretraining #1

Closed · vateye closed this issue 2 years ago

vateye commented 2 years ago

Hi, I noticed that you use a masking probability of 0.5 for MLM. Why not use 0.15, as in BERT? I am curious about this setting.

jayleicn commented 2 years ago

0.15 might be good for text-only pre-training as in BERT, but in cross-modal pre-training the image/video serves as additional context, which makes masked tokens easier to recover, so it can help to increase this ratio. 0.5 was the best setting in our pilot study.
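
For reference, below is a minimal sketch of BERT-style MLM masking with a configurable ratio, following the standard 80/10/10 scheme (replace with `[MASK]` / random token / keep). It is adapted from the widely used HuggingFace collator logic; the function name and details are illustrative, and the repo's actual implementation may differ. Only the `mlm_probability` value changes between the 0.15 and 0.5 settings discussed above.

```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.5):
    """BERT-style MLM masking with a configurable masking ratio (sketch)."""
    labels = input_ids.clone()

    # Sample which (non-special) tokens to mask.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        [tokenizer.get_special_tokens_mask(seq.tolist(), already_has_special_tokens=True)
         for seq in labels],
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # compute loss only on masked positions

    # 80% of masked positions: replace with [MASK].
    indices_replaced = (
        torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    )
    input_ids[indices_replaced] = tokenizer.mask_token_id

    # 10% of masked positions: replace with a random token; remaining 10% keep as-is.
    indices_random = (
        torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
        & masked_indices
        & ~indices_replaced
    )
    random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    input_ids[indices_random] = random_words[indices_random]

    return input_ids, labels
```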