QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

[BUG] Qwen's inference speed is too slow #977

Closed: zrLian closed this issue 6 months ago

zrLian commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Fine-tuning Qwen on 100k samples takes only 40 minutes, yet generating fewer than 10k texts takes 8 hours.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 8 months ago

Training vs. inference speed

These two numbers are not comparable: training and inference execute entirely different code paths. Inference requires autoregressive generation, which is inherently slower than a training step.
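The asymmetry can be made concrete by counting forward passes. This is a minimal, model-agnostic sketch (not Qwen-specific code): with teacher forcing, a training step scores every position of a sequence in one forward pass, while autoregressive generation needs one forward pass per new token.

```python
def training_step_passes(seq_len: int) -> int:
    """Forward passes needed to compute the training loss over a
    sequence of `seq_len` tokens: all positions are scored in
    parallel under teacher forcing, so it is always one pass."""
    return 1


def generation_passes(new_tokens: int) -> int:
    """Forward passes needed to sample `new_tokens` tokens
    autoregressively: one pass per token. A KV cache makes each
    pass cheaper, but does not reduce the number of passes."""
    return new_tokens


# Generating a 512-token reply costs 512 sequential passes,
# versus a single pass to train on a 512-token example.
ratio = generation_passes(512) / training_step_passes(512)
```

This is why a model that fine-tunes on 100k samples in minutes can still take hours to generate 10k texts when requests are issued one at a time.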

Since you haven't shared any details about your setup, I can only recommend going straight to FastChat + vLLM behind the OpenAI-compatible API, and sending concurrent requests to saturate throughput.
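The suggestion above can be sketched as a client that issues many requests in parallel, so the server can batch them. This assumes a vLLM/FastChat server exposing the OpenAI-compatible completions endpoint; the base URL and model name below are placeholders to adjust for your deployment.

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

# Hypothetical deployment values -- change to match your server.
BASE_URL = "http://localhost:8000/v1/completions"
MODEL = "Qwen-7B-Chat"


def build_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible completions request body."""
    return {"model": MODEL, "prompt": prompt, "max_tokens": max_tokens}


def complete(prompt: str) -> str:
    """Send one completion request and return the generated text."""
    req = request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]


def complete_many(prompts: list, workers: int = 16) -> list:
    """Issue prompts concurrently; in-flight requests let the server
    batch them, which is where the throughput gain comes from."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(complete, prompts))


if __name__ == "__main__":
    prompts = [f"Summarize record {i} in one sentence." for i in range(100)]
    texts = complete_many(prompts)
```

Sequentially, each request waits for the previous one to finish; with concurrent requests, vLLM's continuous batching keeps the GPU busy across many generations at once.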