EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

Adding InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding #172

Open KerolosAtef opened 1 month ago

KerolosAtef commented 1 month ago

Hello team, thank you for this great work on video evaluation. Could you add my new benchmark, InfiniBench, to the evaluation benchmarks?

[Project Page] [Code] [📝 arXiv Paper] [🤗 Dataset]

Let me know if you need any help from my side.

Thanks in advance

jzhang38 commented 1 month ago

Hi Kerolos, congrats on the release! You can open a PR and we can then review & merge.

KerolosAtef commented 1 month ago

Is there any tutorial on how to add a new benchmark to your framework?

kcz358 commented 1 month ago

Hi @KerolosAtef, you can refer to the docs for more details. If you are adding a video task, you can check existing tasks such as `worldqa`, `videomme`, and `nextqa`.
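For orientation, tasks in lmms-eval generally pair a YAML config with a `utils.py` that maps each dataset row to the visual inputs, the text prompt, and a scored result. The sketch below is illustrative only, modeled loosely on the existing video tasks; the `infinibench_*` names, the dataset fields (`video_name`, `question`, `answer`), and the cache path are assumptions, not the actual API, so check the docs and a task like `videomme` for the real conventions.

```python
# Hypothetical utils.py for an "infinibench" task -- a sketch, not the
# real lmms-eval interface. Field names and paths are placeholders.
import os

# Assumed local directory where the benchmark videos were downloaded.
CACHE_DIR = os.path.expanduser("~/.cache/infinibench_videos")


def infinibench_doc_to_visual(doc):
    """Map a dataset row to the list of video paths the model should see."""
    return [os.path.join(CACHE_DIR, doc["video_name"])]


def infinibench_doc_to_text(doc, lmms_eval_specific_kwargs=None):
    """Build the text prompt, with optional pre/post prompt strings
    supplied from the task's YAML config."""
    kwargs = lmms_eval_specific_kwargs or {}
    pre = kwargs.get("pre_prompt", "")
    post = kwargs.get("post_prompt", "")
    return f"{pre}{doc['question']}{post}"


def infinibench_process_results(doc, results):
    """Score one generation; here a simple case-insensitive exact match
    against the gold answer, returned under a metric name that the
    YAML's metric_list would reference."""
    pred = results[0].strip().lower()
    gold = doc["answer"].strip().lower()
    return {"exact_match": 1.0 if pred == gold else 0.0}
```

In this pattern the YAML config wires these functions in by name, so adding a benchmark is mostly writing the row-to-prompt and row-to-score mappings for your dataset.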