AILab-CVC / SEED-Bench

(CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions.

Question on how task27 generates images #19

Open JunZhan2000 opened 5 months ago

JunZhan2000 commented 5 months ago

Hello. I saw this description in your paper:

> Evaluation of text and image output. We first employ an answer ranking strategy to select the most likely text prediction. If it matches the ground truth, we evaluate the image output using the CLIP similarity score [50] between the generated image and each candidate. The model is deemed correct only if both text and image predictions match the ground truth.

I'm a little confused about how the image is generated. Should the model first spell out the corresponding text answer and then generate the image, or should it generate the image directly from the question? Thanks for your work!

geyuying commented 5 months ago

Given the question, we first evaluate text generation with an answer ranking strategy. Specifically, for each choice of a question, we compute the likelihood that an MLLM generates the textual content of this choice given the question. We select the choice with the highest likelihood as the model's prediction for text generation.
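For concreteness, here is a minimal sketch of what such a likelihood-based answer ranking can look like, assuming a Hugging Face causal LM. The checkpoint, prompt handling, and the text-only setup (a real MLLM would also condition on the question's image) are illustrative simplifications, not the official SEED-Bench evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only; checkpoint and prompt format are placeholders.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def choice_loglikelihood(question: str, choice: str) -> float:
    """Sum of log-probs the model assigns to the choice tokens, given the question."""
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    c_ids = tokenizer(" " + choice, return_tensors="pt").input_ids
    input_ids = torch.cat([q_ids, c_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict the token at position i + 1, so the
    # choice tokens (positions q_len .. end) are scored by the log-probs
    # at positions q_len - 1 .. end - 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    q_len = q_ids.shape[1]
    targets = input_ids[0, q_len:]
    scores = log_probs[q_len - 1 : q_len - 1 + targets.shape[0]]
    return scores.gather(1, targets.unsqueeze(1)).sum().item()

def rank_answer(question: str, choices: list[str]) -> str:
    # The choice the model is most likely to generate is its text prediction.
    return max(choices, key=lambda c: choice_loglikelihood(question, c))
```

One caveat: summing raw log-probs tends to favor shorter choices, and some evaluation harnesses length-normalize instead, so whether to normalize here is a design choice worth checking against the released evaluation code.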

If the text prediction matches the ground truth, we then evaluate image generation. Given the question, the model generates text and an image directly. We evaluate the image output using the CLIP similarity score between the generated image and each candidate image. The model is deemed correct only if both text and image predictions match the ground truth.
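A sketch of the image-side check, under the same caveat: the CLIP checkpoint and preprocessing below are assumptions for illustration (the paper only specifies that a CLIP similarity score is used), not the exact evaluation pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint choice; SEED-Bench may use a different CLIP variant.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip.eval()

@torch.no_grad()
def embed_images(images: list[Image.Image]) -> torch.Tensor:
    inputs = processor(images=images, return_tensors="pt")
    feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize for cosine similarity

def pick_candidate(generated: Image.Image, candidates: list[Image.Image]) -> int:
    """Return the index of the candidate image most similar to the generated image."""
    gen = embed_images([generated])    # [1, d]
    cand = embed_images(candidates)    # [k, d]
    sims = (gen @ cand.T).squeeze(0)   # cosine similarities, shape [k]
    return int(sims.argmax())

# The sample counts as correct only if pick_candidate(...) returns the index
# of the ground-truth candidate AND the ranked text choice also matched.
```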

JunZhan2000 commented 5 months ago

> Given the question, we first evaluate text generation with an answer ranking strategy. Specifically, for each choice of a question, we compute the likelihood that an MLLM generates the textual content of this choice given the question. We select the choice with the highest likelihood as the model's prediction for text generation.
>
> If the text prediction matches the ground truth, we then evaluate image generation. Given the question, the model generates text and an image directly. We evaluate the image output using the CLIP similarity score between the generated image and each candidate image. The model is deemed correct only if both text and image predictions match the ground truth.

Thanks for your reply. I have a few more questions:

  1. Are we allowed to add additional prompts?
  2. If no image is generated, does that count as a failed generation? Can the model be forced to generate an image?