X-PLUG / mPLUG-Owl

mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
https://www.modelscope.cn/studios/damo/mPLUG-Owl
MIT License

How to have multiple conversation turns during one model inference? #76

Closed yuki9965 closed 1 year ago

yuki9965 commented 1 year ago

Hi, thanks for your great work! In your model inference example, the prompt looks like [ '''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: \<image> Human: Explain why this meme is funny. AI: ''']

but the conversation can only run for one turn. I wonder if the model can handle multiple turns during one model inference. What would the prompt need to be? I think it might look like:

[ '''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: \<image> Human: Explain why this meme is funny. AI: Human: xxxx(another question) AI: ''']

But I can't get the expected results. Can you tell me the correct prompt? Thank you!

MAGAer13 commented 1 year ago

Yes, you are right.

yuki9965 commented 1 year ago

But it seems that the model only answers the last question.

prompts = [ '''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: \<image> Human: Explain why this meme is funny. AI: Human: What's the color of the floor? AI:''']

output: The floor in the image is blue.

MAGAer13 commented 1 year ago

Oh, I misunderstood your question. The prompt carries the conversation history, so you need to add the previous turns (including the model's earlier answers) into the prompt, and then append the new question with a trailing "AI:" where you expect the output.
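To illustrate, here is a minimal sketch of building such a prompt turn by turn. `build_prompt` is a hypothetical helper for this thread, not part of the mPLUG-Owl API; the assumption is simply that each model reply gets folded back into the prompt as an "AI:" turn before the next question is asked.

```python
# Sketch: multi-turn prompt construction for mPLUG-Owl-style inference.
# Assumption: the model sees the full history as plain text and continues
# after the final "AI:". build_prompt is a hypothetical helper.

SYSTEM = ("The following is a conversation between a curious human and AI assistant. "
          "The assistant gives helpful, detailed, and polite answers to the "
          "user's questions.")

def build_prompt(turns, has_image=True):
    """turns: list of (question, answer) pairs; the last answer is None,
    which leaves a trailing 'AI:' for the model to complete."""
    parts = [SYSTEM]
    if has_image:
        parts.append("Human: <image>")
    for question, answer in turns:
        parts.append(f"Human: {question}")
        parts.append(f"AI: {answer}" if answer is not None else "AI:")
    return "\n".join(parts)

# First turn: ask the first question, leaving "AI:" open.
history = [("Explain why this meme is funny.", None)]
prompt1 = build_prompt(history)

# Suppose the model replied with reply1; record it in the history
# before asking the second question.
reply1 = "The meme is funny because ..."
history = [("Explain why this meme is funny.", reply1),
           ("What's the color of the floor?", None)]
prompt2 = build_prompt(history)
```

With this shape, `prompt2` contains the first question, the model's first answer, and the second question, ending in an open "AI:" — so the model answers the new question in the context of the earlier exchange instead of seeing only the last question.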