EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

Reproduction results of LLaVA v1.5 on MMBench CN #139

Open · xzhouzeng opened this issue 3 months ago

xzhouzeng commented 3 months ago

Hello, I tried to replicate the experimental results of LLaVA-1.5-7B on MMBench CN, but my final test result was only 55.756, which differs slightly from the documented scores (58.3 and 57.62). What modifications do I need to make? (screenshot of results attached)
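For reference, a typical lmms-eval invocation for this setup is sketched below, following the command pattern in the repository README. The task name `mmbench_cn_dev` and the checkpoint id are assumptions about the configuration being compared against the documented scores; adjust them to match the run that produced 55.756.

```bash
# Sketch of an evaluation run for LLaVA-1.5-7B on MMBench-CN, assuming 8 GPUs.
# Task name (mmbench_cn_dev) and checkpoint id are assumptions; change to match your setup.
accelerate launch --num_processes=8 -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mmbench_cn_dev \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mmbench_cn \
    --output_path ./logs/
```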

fmm170 commented 2 weeks ago

I encountered the same problem; my result was also 55.756. How can this be resolved?