Q-Future / Q-Bench

[ICLR 2024 Spotlight] (GPT-4V / Gemini-Pro / Qwen-VL-Plus + 16 open-source MLLMs) A benchmark for multi-modality LLMs (MLLMs) on low-level vision and visual quality assessment.
https://q-future.github.io/Q-Bench/

Question about model version. #5

Closed: Yangr116 closed this issue 1 year ago

Yangr116 commented 1 year ago

Great work! I would like to know which LLaVA model you used in the experiments. Is the base model Vicuna-13B-v1.3, Vicuna-13B-v1.1, or Vicuna-13B-v0? [two screenshots attached]

teowu commented 1 year ago

Hi Rui!

For LLaVA-v1, we use Vicuna-13B-v1.3 as the base, i.e., https://huggingface.co/liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3.

It was the best and most popular LLaVA version when we initiated the Q-Bench project, and we further evaluated LLaVA-v1.5 as soon as it was released.
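(Not part of the original reply: a minimal sketch of how that LoRA checkpoint could be loaded with the LLaVA codebase's `load_pretrained_model` helper. The `lmsys/vicuna-13b-v1.3` base path is an assumption; the LoRA weights need to be applied on top of the Vicuna-13B-v1.3 base.)

```python
# Sketch: load the LLaVA-v1 (0719, 336px, LoRA) checkpoint referenced above.
# Requires the LLaVA repo (github.com/haotian-liu/LLaVA) to be installed.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3"
model_base = "lmsys/vicuna-13b-v1.3"  # assumed Vicuna-13B-v1.3 base for the LoRA weights

# Returns the tokenizer, the merged LLaVA model, the CLIP image processor,
# and the model's context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=model_base,
    model_name=get_model_name_from_path(model_path),
)
```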

Best,
Haoning