Closed: ChenyuGAO-CS closed this issue 1 year ago.

❓ Questions and Help

Hi, I found that there are several model links in models.yaml. Which one corresponds to the task-specific pre-training (on VQA 2.0) model of VisualBERT, i.e., the model pretrained on COCO and then further pretrained on VQA 2.0? Also, is there an accuracy report for the three versions of VisualBERT on VQA 2.0 under MMF?

Hi, we didn't do task-specific pretraining on VisualBERT for our paper (https://arxiv.org/abs/2004.08744), as our experiments suggested it wasn't beneficial enough to justify the time it requires. But you can do it on your own in MMF.

In the model zoo, we have
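(For reference, a minimal sketch of how a checkpoint listed in models.yaml can be loaded in MMF and how task-specific pretraining on VQA 2.0 could be launched from it. The zoo key and config path used below are illustrative assumptions, not confirmed entries; check mmf/configs/zoo/models.yaml in your MMF version for the keys that actually ship with it.)

```python
# Minimal sketch, assuming an MMF installation. The zoo key and the mmf_run
# arguments below are assumptions for illustration; consult
# mmf/configs/zoo/models.yaml for the keys available in your version.
from mmf.models.visual_bert import VisualBERT

# Load a VisualBERT checkpoint from the MMF model zoo by its zoo key.
# "visual_bert.pretrained.coco" is a hypothetical key standing in for the
# COCO-pretrained checkpoint referenced in models.yaml.
model = VisualBERT.from_pretrained("visual_bert.pretrained.coco")
model.eval()

# Task-specific training on VQA 2.0 would then be launched from the command
# line, initializing from that zoo checkpoint, e.g. (config path and options
# are assumptions):
#   mmf_run config=projects/visual_bert/configs/vqa2/defaults.yaml \
#       model=visual_bert dataset=vqa2 run_type=train_val \
#       checkpoint.resume_zoo=visual_bert.pretrained.coco
```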