InternLM / InternLM-XComposer

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

Model inference performance optimization #91

Open will-wiki opened 9 months ago

will-wiki commented 9 months ago

Thanks for open-sourcing this project! I recently tried the InternLM-XComposer-VL-7b model and the results are great, but inference is a bit slow. I'm running inference on a V100, using 26 GB of GPU memory at about 10 s per sample. Are there any recommended ways to accelerate inference for this model? Any suggestions would be much appreciated.

I also tried the internlm/internlm-xcomposer-7b-4bit model on the same machine. GPU memory usage dropped from 26 GB to 20 GB, but latency doubled to about 20 s per sample. I'm not sure whether I misconfigured something, but inference became much slower. Why might that be?

My environment: GPU: V100, torch: 2.1, CUDA: 11.8, Python: 3.9
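For reference, the following is a minimal latency-measurement sketch, assuming the AutoModel / trust_remote_code loading pattern and the model.generate(text, image) interface shown in the repo's example scripts (the model id, image path, and call are assumptions; adjust them if your local setup differs):

```python
# Minimal sketch: load the VL-7b checkpoint in fp16 and time one multimodal
# generation end to end. The loading pattern and generate(text, image) call
# are assumed from the repo's example scripts.
import time
import torch
from transformers import AutoModel, AutoTokenizer

model_path = 'internlm/internlm-xcomposer-vl-7b'
model = AutoModel.from_pretrained(
    model_path, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.tokenizer = tokenizer

text = 'Describe this image.'
image = 'examples/images/example.jpg'  # replace with a local image path

torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    response = model.generate(text, image)
torch.cuda.synchronize()
print(f'latency: {time.time() - start:.2f} s')
print(response)
```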

myownskyW7 commented 9 months ago

Thank you for submitting your request. To assist you better, could you please provide us with the code you are currently running along with the command you used? We will review it as soon as possible.

will-wiki commented 9 months ago

@myownskyW7 I tested with the examples/example_chat_4bit.py and examples/example_chat.py scripts from the repo. The numbers are average inference times measured by running one prompt over multiple images.
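A rough sketch of that timing setup, with an explicit warm-up run and CUDA synchronization so the first call's initialization cost and asynchronously queued kernels do not skew the per-sample average; run_one is a hypothetical wrapper around whatever single-sample call the example script makes:

```python
# Sketch: average per-image latency with one warm-up call and explicit
# synchronization, so asynchronous CUDA work is fully counted.
import time
import torch

def time_inference(run_one, text, images, warmup=1):
    # run_one is a hypothetical callable for a single (text, image) pair,
    # e.g. lambda t, im: model.generate(t, im)
    for _ in range(warmup):
        run_one(text, images[0])
    torch.cuda.synchronize()
    start = time.time()
    for img in images:
        run_one(text, img)
    torch.cuda.synchronize()
    return (time.time() - start) / len(images)

# avg_s = time_inference(lambda t, im: model.generate(t, im), prompt, image_paths)
# print(f'average latency: {avg_s:.2f} s per image')
```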

will-wiki commented 9 months ago

@myownskyW7 Hi, is there any conclusion on this issue yet? 4-bit quantization should improve performance, so this result seems odd.

LianghuiGuo commented 8 months ago

Same question here. With the InternLM-XComposer-VL-7b model on a single A100, inference takes about 5 s per sample. Are there any optimization options?

hyyuan123 commented 7 months ago

Thanks for open-sourcing this project! I recently tried the InternLM-XComposer-VL-7b model and the results are great, but inference is a bit slow. I'm running inference on a V100, using 26 GB of GPU memory at about 10 s per sample. Are there any recommended ways to accelerate inference for this model? Any suggestions would be much appreciated.

I also tried the internlm/internlm-xcomposer-7b-4bit model on the same machine. GPU memory usage dropped from 26 GB to 20 GB, but latency doubled to about 20 s per sample. I'm not sure whether I misconfigured something, but inference became much slower. Why might that be?

My environment: GPU: V100, torch: 2.1, CUDA: 11.8, Python: 3.9

Hello, sorry to bother you. I used the example/demo_chat.py file from this repo, also on a V100, but it keeps reporting insufficient CUDA memory. The files may have been modified since then; could you share the examples/example_chat_4bit.py and examples/example_chat.py files? Thanks.

hyyuan123 commented 7 months ago

Thank you for submitting your request. To assist you better, could you please provide us with the code you are currently running along with the command you used? We will review it as soon as possible.

Hello, I ran the example/gradio_demo_chat.py file on a V100, but it keeps reporting insufficient CUDA memory. How much GPU memory does this program need? I see that others have managed to run it on a V100; what could be causing this?
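As a generic first check for this kind of out-of-memory error (not specific to gradio_demo_chat.py, whose default settings are not shown here): confirm how much memory the card actually has free, and make sure the weights are loaded in half precision, which roughly halves the footprint of an fp32 load. A hedged sketch, assuming the same checkpoint and loading pattern as the example scripts:

```python
# Sketch: report free/total GPU memory, then load the checkpoint in fp16.
import torch
from transformers import AutoModel, AutoTokenizer

free, total = torch.cuda.mem_get_info()  # bytes
print(f'GPU memory: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB')

model_path = 'internlm/internlm-xcomposer-vl-7b'  # adjust to the checkpoint the demo loads
model = AutoModel.from_pretrained(
    model_path, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.tokenizer = tokenizer
```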