BradyFU / Awesome-Multimodal-Large-Language-Models

:sparkles::sparkles:Latest Advances on Multimodal Large Language Models

Request to add MoE-LLaVA-2.7B×4 into the MME benchmark #116

Closed — LinB203 closed this issue 7 months ago

LinB203 commented 7 months ago

Hi,

Thanks for the efforts in building the MME benchmark!

We would like to request adding our MoE-LLaVA-2.7B×4 to the MME benchmark.

Title: "MoE-LLaVA: Mixture of Experts for Large Vision-Language Models"
Paper: https://arxiv.org/abs/2401.15947
Code: https://github.com/PKU-YuanGroup/MoE-LLaVA
Evaluation Code: https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/moellava/eval/model_vqa_loader.py
Results: (attached screenshot)

Thank you very much!

BradyFU commented 7 months ago

Copy that, dear binbin : )