Closed: LinB203 closed this issue 10 months ago
Hi,
Thanks for your efforts in building the MME benchmark!
We would like to request that our MoE-LLaVA-2.7B×4 be added to the MME benchmark.
Title: "MoE-LLaVA: Mixture of Experts for Large Vision-Language Models" Paper: https://arxiv.org/abs/2401.15947 Code: https://github.com/PKU-YuanGroup/MoE-LLaVA Evaluation Codes: https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/moellava/eval/model_vqa_loader.py Results:
Thank you very much!
Copy that, dear binbin! : )