EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

LLaVA Benchmark #81

Open yqy2001 opened 1 month ago

yqy2001 commented 1 month ago

Hi! I am confused about the difference between llava_bench_coco and llava_in_the_wild. How do the two tasks differ?

Also, when will llava_bench (wilder), the benchmark used in LLaVA-NeXT, be supported?

Thank you.

kcz358 commented 1 month ago

Hi, you can find the difference explained in the LLaVA paper: https://arxiv.org/pdf/2304.08485

[Screenshot from the paper: LLaVA-Bench (COCO) samples 30 images from COCO-Val-2014 with 90 curated questions, while LLaVA-Bench (In-the-Wild) collects 24 diverse in-the-wild images with 60 questions.]

For llava-wilder, we have already implemented everything in our internal development branch, but some work needs to be done before release. You can definitely expect it to appear in lmms-eval in the future.
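In the meantime, both existing LLaVA benchmarks can be run through the standard lmms-eval launcher. A minimal sketch, using a LLaVA-1.5 checkpoint as an illustrative `pretrained` argument (substitute your own model):

```bash
# Evaluate one model on both LLaVA benchmarks in a single run.
# The checkpoint name below is illustrative, not required.
python3 -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks llava_bench_coco,llava_in_the_wild \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```

Note that both tasks score model outputs with a GPT judge, so an OpenAI API key must be available in the environment before launching.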

yqy2001 commented 1 month ago

Great work! Thank you for the response. Looking forward to llava-wilder support.