JunZhan2000 opened 5 months ago
Given the question, we first evaluate text generation with an answer-ranking strategy. Specifically, for each choice of a question, we compute the likelihood that an MLLM generates the textual content of that choice given the question. We select the choice with the highest likelihood as the model's prediction for text generation.
If the text prediction matches the ground truth, we then evaluate the image generation. Given the question, the model generates text and an image directly. We evaluate the image output using the CLIP similarity score between the generated image and each candidate image. The model is deemed correct only if both the text and image predictions match the ground truth.
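The two-stage evaluation described in the paper could be sketched roughly as follows. This is only an illustrative outline, not the authors' actual code: `text_loglikelihood` and `clip_similarity` are hypothetical placeholders standing in for the real MLLM log-probability computation and the real CLIP embedding similarity.

```python
def text_loglikelihood(question, choice_text):
    # Hypothetical placeholder: in practice this would sum the MLLM's
    # token log-probabilities of choice_text conditioned on the question.
    toy_scores = {"a cat": -1.2, "a dog": -3.4}
    return toy_scores.get(choice_text, -10.0)

def clip_similarity(generated_image, candidate_image):
    # Hypothetical placeholder: in practice this would be the cosine
    # similarity between CLIP embeddings of the two images.
    return 1.0 if generated_image == candidate_image else 0.2

def evaluate(question, text_choices, text_gt,
             image_candidates, image_gt, generated_image):
    # Stage 1: answer ranking — pick the choice the model assigns
    # the highest likelihood to, and require it to match the ground truth.
    pred_text = max(text_choices, key=lambda c: text_loglikelihood(question, c))
    if pred_text != text_gt:
        return False
    # Stage 2: match the generated image to the candidate it is most
    # CLIP-similar to; the model is correct only if both stages match.
    pred_image = max(image_candidates,
                     key=lambda img: clip_similarity(generated_image, img))
    return pred_image == image_gt
```

For example, with the toy scores above, `evaluate("Which animal?", ["a cat", "a dog"], "a cat", ["img_cat", "img_dog"], "img_cat", "img_cat")` returns `True`, while changing the text ground truth to `"a dog"` makes stage 1 fail and returns `False`.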
Thanks for your reply; I have some more questions.
Hello. I saw this description in your paper.
I'm a little confused about how the image is generated: does the model first produce the corresponding text answer and then generate the image, or does it generate the image directly from the question? Thanks for your work!