nemonameless opened this issue 9 months ago
Thank you very much for your interest in our work. I have made the necessary modifications. Currently, there are 509 occurrences of the `<img>` token, which marks the position of the corresponding images in the L2 problems. For In-Context Captioning, there are 120 problems with 3 images per problem, resulting in 360 images. Interleaved Image-Text Analysis consists of 49 problems with a total of 149 images.
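These counts can be sanity-checked against the benchmark JSON. A minimal sketch, assuming each entry is a dict with a `"question"` field containing the `<img>` placeholders (the helper name and sample data are hypothetical):

```python
import json

# Hypothetical helper: count occurrences of the "<img>" placeholder across
# a list of benchmark entries, as a sanity check against the reported totals
# (509 for L2 overall, 360 for In-Context Captioning).
def count_img_tokens(entries):
    return sum(e.get("question", "").count("<img>") for e in entries)

# Tiny inline sample mimicking the In-Context Captioning format
# (3 images per problem); a real check would instead load the file, e.g.:
#   entries = json.load(open("SEED-Bench_v2_level1_2_3.json"))
sample = [
    {"question": "<img>: caption A. <img>: caption B. <img>:"},
]
print(count_img_tokens(sample))  # 3 placeholders in the sample problem
```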
Just fixed `[img]` -> `<img>`, but that has not solved the question I asked.
"question": "<img>: The predominant color of the uniforms worn by the players is blue. <img>: The most notable color present in the woman's outfit is orange. <img>:",
There is still no question after the last `<img>`.
Please fix it. Thanks.
In fact, this format is specifically designed by us to address such issues. As mentioned in our paper, "Part-2 evaluates MLLMs' comprehension of arbitrary interleaved image-text inputs, including In-Context Captioning. In this task, two examples of image-caption pairs along with an image are provided, and the model is expected to describe the specific aspect of the image." For more details, please refer to Section 3.2.2 of our paper on SEED-Bench-2.
Could you please provide a prompt or question? We just want to add a question after the last `<img>` for our model testing.
For example:
Please select the description below that best describes the last image.
Which of the following options provides the same type of description for the last picture?
...
Hi, following the few-shot setting of Flamingo [1], we do not provide a specific prompt for evaluating in-context captioning. Since we adopt PPL as the evaluation metric, it may not be necessary to add a question for model testing.
[1] Flamingo: a Visual Language Model for Few-Shot Learning
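To illustrate why no trailing question is needed under PPL evaluation, here is a minimal sketch of perplexity-based multiple-choice scoring. It assumes the model exposes a token-level log-probability function `log_probs(context, continuation)` (hypothetical; real MLLM APIs differ), and the toy scorer and strings are made up for illustration:

```python
import math

# Sketch of PPL-based multiple-choice evaluation: each candidate caption is
# appended directly after the final "<img>:", its perplexity is computed, and
# the lowest-PPL candidate wins -- so no extra question text is required.
def pick_by_ppl(context, candidates, log_probs):
    def ppl(cand):
        lps = log_probs(context, cand)          # one log-prob per token
        return math.exp(-sum(lps) / len(lps))   # perplexity of the candidate
    return min(candidates, key=ppl)

# Toy stand-in scorer (hypothetical): favors candidates mentioning "orange",
# standing in for a real model's conditional log-probabilities.
def toy_log_probs(context, continuation):
    base = -0.5 if "orange" in continuation else -2.0
    return [base] * len(continuation.split())

ctx = "<img>: ... blue. <img>: ... orange. <img>:"
print(pick_by_ppl(ctx, ["an orange sky", "a blue car"], toy_log_probs))
# -> an orange sky
```

In practice the context would be the full interleaved image-text sequence from the benchmark entry, and `log_probs` would come from the model under test.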
In SEED-Bench_v2_level1_2_3.json, there are 360 questions that end in this style:
<img>:
Did you put the wrong data?