X-PLUG / mPLUG-Owl

mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
https://www.modelscope.cn/studios/damo/mPLUG-Owl
MIT License

Lower text generation quality with batch size > 1 in mPLUG-owl inference: Seeking insights on possible causes #85

Closed pyogher closed 1 year ago

pyogher commented 1 year ago

Hi,

I have noticed that when performing inference with a batch size greater than 1 using mPLUG-owl, the quality of the generated text is significantly worse than with a batch size of 1. I have thoroughly reviewed the code but couldn't find anything that would explain this behavior.

Could you please provide insights into the possible reasons behind this performance difference when using different batch sizes in mPLUG-owl's inference? I would greatly appreciate any suggestions or explanations you can provide.
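One common cause of this symptom in general (an assumption on my part, not confirmed for mPLUG-owl specifically) is padding: a decoder-only model predicts the next token from the final position of each sequence, so if variable-length prompts in a batch are right-padded, the shorter prompts end in pad tokens and generation conditions on padding instead of the real prompt. A minimal sketch of the effect, with a hypothetical pad token id:

```python
PAD = 0  # hypothetical pad token id

def pad_batch(seqs, side="right"):
    """Pad variable-length token sequences to equal length on one side."""
    width = max(len(s) for s in seqs)
    if side == "right":
        return [s + [PAD] * (width - len(s)) for s in seqs]
    return [[PAD] * (width - len(s)) + s for s in seqs]

def last_position_token(padded_seq):
    """Token at the final position -- what a decoder-only model
    conditions on when predicting the next token."""
    return padded_seq[-1]

prompts = [[5, 6, 7, 8], [9, 10]]        # two prompts of unequal length

right = pad_batch(prompts, side="right")  # shorter prompt now ends in PAD
left = pad_batch(prompts, side="left")    # real last tokens preserved

print(last_position_token(right[1]))  # → 0 (PAD: generation conditions on padding)
print(last_position_token(left[1]))   # → 10 (the prompt's true last token)
```

With batch size 1 there is no padding at all, which would explain why quality only drops for larger batches; if this is the cause, left padding (plus a correct attention mask) is the usual fix.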

MAGAer13 commented 1 year ago

We only support batch_size=1 currently.
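Until batched inference is supported, a workaround is to split the batch and run samples one at a time. A sketch, where `generate_one` is a stand-in for whatever single-sample inference call the repo's demo code already uses:

```python
from typing import Callable, List

def generate_singly(generate_one: Callable[[str], str],
                    prompts: List[str]) -> List[str]:
    """Run single-sample inference per prompt instead of one batched call.

    Slower than true batching, but stays on the supported
    batch_size=1 code path.
    """
    return [generate_one(p) for p in prompts]

# usage with a dummy stand-in for the real model call:
outputs = generate_singly(lambda p: p.upper(), ["a cat", "a dog"])
print(outputs)  # → ['A CAT', 'A DOG']
```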

pyogher commented 1 year ago

Thanks!