Closed: zycheiheihei closed this issue 1 year ago.
Hi @zycheiheihei.

> For now, only MiniGPT-4 is supported with the MME dataset. I wonder if it's possible to further support MLLMs like InstructBLIP and LLaVA.

Since all our MLLMs in opencompass follow a similar evaluation pipeline, replacing the MiniGPT-4 model in the MME config with the LLaVA or InstructBLIP model and implementing a LLaVAMMEPromptConstructor following the MiniGPT-4 one should do the job.
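As a rough illustration, the swap could look like the sketch below. This is a minimal sketch only: the file path, the `llava_mme_model` name, the `'llava-7b'` type key, and the exact config fields are placeholders modeled on the MiniGPT-4 MME config pattern, not the actual opencompass LLaVA settings.

```python
# Hypothetical configs/multimodal/llava/llava_7b_mme.py, mirroring the
# MiniGPT-4 MME config but pointing at LLaVA instead.
from opencompass.multimodal.models.llava import LLaVAMMEPromptConstructor  # to be implemented

llava_mme_model = dict(
    type='llava-7b',  # placeholder; use whatever key your LLaVA wrapper registers
    prompt_constructor=dict(
        type=LLaVAMMEPromptConstructor,
        system_prompt='',  # optional custom system prompt
        reply_prompt='',   # optional custom reply prompt
    ),
    # other model fields (checkpoint path, image processor, ...) as in the MiniGPT-4 config
)
```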
> I also have a question about the MME prompt constructor: where can I find a reference for setting it up myself? I assume prompt design has quite a big impact on the benchmarking results.

As shown in the config, the prompt constructor is imported on this line: https://github.com/InternLM/opencompass/blob/c26ecdb1b05baea7bcf34c99ea245ae68a3ada83/configs/multimodal/minigpt_4/minigpt_4_7b_mme.py#L1
And the class is defined here: https://github.com/InternLM/opencompass/blob/c26ecdb1b05baea7bcf34c99ea245ae68a3ada83/opencompass/multimodal/models/minigpt_4/prompt_constructor.py#L143-L156
We support a custom system_prompt and reply_prompt for now, and you could also implement your own constructor there. Don't forget to register it in __init__.py before importing it.
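For reference, a bare-bones constructor might look roughly like this. It is a sketch only: the real MiniGPT-4 constructor linked above is the authoritative pattern, and the call signature assumed here may differ from what opencompass actually expects.

```python
# Hypothetical opencompass/multimodal/models/llava/prompt_constructor.py
class LLaVAMMEPromptConstructor:
    """Builds MME prompts for LLaVA, mirroring the MiniGPT-4 version."""

    def __init__(self, system_prompt: str = '', reply_prompt: str = '') -> None:
        self.system_prompt = system_prompt
        self.reply_prompt = reply_prompt

    def __call__(self, question: str) -> str:
        # Wrap the raw MME yes/no question with the configured prompts.
        parts = [self.system_prompt, question, self.reply_prompt]
        return ' '.join(p for p in parts if p)
```

Assuming "register it in __init__.py" means exporting the class from the package, that step would be:

```python
# opencompass/multimodal/models/llava/__init__.py
from .prompt_constructor import LLaVAMMEPromptConstructor

__all__ = ['LLaVAMMEPromptConstructor']
```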
Hope it helps. Feel free to ask if anything is unclear.
Thanks for your quick reply!