OpenGVLab / Multi-Modality-Arena

Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!

Submit users' own evaluation results to the benchmark #4

Closed aopolin-lv closed 1 year ago

aopolin-lv commented 1 year ago

Hello, authors. This is amazing work. Have you considered adding a feature that lets users submit their own evaluation results to the benchmark, like C-Eval?

BellXP commented 1 year ago

Thank you for your inquiry. The latest version of the code can be found in the LVLM_evaluation folder. This directory contains comprehensive evaluation code along with the necessary datasets. If you're interested in participating in the evaluation, please feel free to share your evaluation results or the model inference API with us via email at xupeng@pjlab.org.cn.
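For reference, a minimal sketch of what a shareable model inference API might look like, assuming a single-call interface that maps an image and a question to a text answer. The class name `MyVLMWrapper` and its `generate` signature are purely illustrative, not the repository's required spec:

```python
# Hypothetical sketch of a model inference entry point -- names and
# signature are illustrative, not the repo's specification.
from PIL import Image


class MyVLMWrapper:
    """Thin wrapper exposing a single-call inference entry point."""

    def __init__(self, model_path: str):
        # Load your vision-language model here (placeholder).
        self.model_path = model_path

    def generate(self, image_path: str, question: str) -> str:
        """Return the model's answer for one image-question pair."""
        image = Image.open(image_path).convert("RGB")
        # Replace this stub with your model's real forward pass.
        return f"[stub answer for {question!r} on a {image.size} image]"


if __name__ == "__main__":
    # Create a small demo image so the sketch runs end to end.
    Image.new("RGB", (224, 224), color="white").save("example.jpg")
    api = MyVLMWrapper(model_path="path/to/checkpoint")
    print(api.generate("example.jpg", "What is in the picture?"))
```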

aopolin-lv commented 1 year ago

Thank you