According to the docs under the Hateful Memes directory, I should be able to run
And output a reasonably performant csv for submission. Specifically, we are running the Visual BERT baselines with
, since it seems that running with the from_pretrained flag enabled on from_coco.yaml is only meant for training (inference with that config gave inconsistent predictions).
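For concreteness, the prediction command we are invoking has roughly this shape (config path and zoo key written from memory, so treat them as illustrative rather than exact):

```
# Generate the test-set csv for submission (approximate form; the exact
# config path / checkpoint.resume_zoo key may differ on our side).
mmf_predict config=projects/hateful_memes/configs/visual_bert/from_coco.yaml \
    model=visual_bert \
    dataset=hateful_memes \
    run_type=test \
    checkpoint.resume_zoo=visual_bert.finetuned.hateful_memes.from_coco
```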
Running the mmf_run variant of the above command on validation gives a good AUROC (~0.73). However, when we submit the test csv we have been getting AUROC scores on the order of ~0.3, which seems odd. Is this expected behavior? Are we not using the right configs here?

We have also tried training our own models using from_coco.yaml as a starting point, but we are again seeing low test AUROC despite high validation AUROC. We strongly suspect something is off in the inference flow, but on inspection nothing looks clearly incorrect...
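The validation run referred to above is the same set of overrides passed to mmf_run instead of mmf_predict, roughly (again illustrative, not a verbatim copy of our command):

```
# Evaluate the same checkpoint on the validation set; this is the run that
# reports ~0.73 AUROC for us (approximate form).
mmf_run config=projects/hateful_memes/configs/visual_bert/from_coco.yaml \
    model=visual_bert \
    dataset=hateful_memes \
    run_type=val \
    checkpoint.resume_zoo=visual_bert.finetuned.hateful_memes.from_coco
```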