MMStar-Benchmark / MMStar

This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
https://mmstar-benchmark.github.io

Great work #3

Closed gordonhu608 closed 5 months ago

gordonhu608 commented 5 months ago

Thanks for the great work! This is something the community urgently needs, especially the discussion of the MG and ML metrics.

xiaoachen98 commented 5 months ago

> Thanks for the great work! This is something the community urgently needs, especially the discussion of the MG and ML metrics.

Thank you for recognizing our work! We will make further efforts to evaluate LVLMs more rigorously in subsequent versions.