TRI-ML / vlm-evaluation

VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning

Add in AI2D Eval #5

Closed: ashwin-balakrishna96 closed this issue 6 months ago

ashwin-balakrishna96 commented 6 months ago

This eval closely follows the TallyQA eval. All questions are multiple-choice with 4 options. The results look generally reasonable to me, especially given the numbers in the Qwen-VL paper: they report between 57.7 and 62.3 on the test set, with Pix2Struct-Large getting 42.1 and PaLI getting 81.4.

=> AI2D-val-Accuracy Accuracy (Official): 0.543
=> AI2D-val-AUCROC Accuracy (Official): 0.783
=> AI2D-val-AUCPR Accuracy (Official): 0.614
=> AI2D-test-Accuracy Accuracy (Official): 0.542
=> AI2D-test-AUCROC Accuracy (Official): 0.799
=> AI2D-test-AUCPR Accuracy (Official): 0.635
=> AI2D-final-Accuracy Accuracy (Official): 0.542
=> AI2D-final-AUCROC Accuracy (Official): 0.790
=> AI2D-final-AUCPR Accuracy (Official): 0.622
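For context on the headline Accuracy metric: a minimal sketch of 4-way multiple-choice exact-match scoring, as used for AI2D-style questions. This is a hypothetical illustration (the helper name `mc_accuracy` is not from this repo), not the repo's actual implementation.

```python
def mc_accuracy(predictions, golds):
    """Fraction of examples where the predicted choice index matches the gold.

    Each entry is an index into the question's 4 answer options (0-3).
    This is a sketch of exact-match multiple-choice scoring, not the
    vlm-evaluation codebase's own metric implementation.
    """
    if len(predictions) != len(golds):
        raise ValueError("predictions and golds must be the same length")
    correct = sum(p == g for p, g in zip(predictions, golds))
    return correct / len(golds)


# Toy example: 4 questions, model gets 3 of 4 right.
preds = [0, 2, 1, 3]
golds = [0, 2, 3, 3]
print(f"Accuracy: {mc_accuracy(preds, golds):.3f}")
```

Note that random guessing on 4-option questions yields ~0.25 accuracy, so the ~0.54 reported above is well above chance but below the PaLI number cited from the literature.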