QwenLM / Qwen

The official repository of Qwen (通义千问), the chat and pretrained large language models proposed by Alibaba Cloud.
Apache License 2.0

Why are all results empty when running inference with vLLM on a fine-tuned model? #1082

Closed · lalalabobobo closed this 5 months ago

lalalabobobo commented 7 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Inference output is empty after SFT when serving with vLLM.

Expected Behavior

Inference returns non-empty output.

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

github-actions[bot] commented 5 months ago

This issue has been automatically marked as inactive due to lack of recent activity. If you believe it remains unresolved and warrants attention, please leave a comment on this thread.

Mingfeng-Chen commented 5 months ago

I'm running into the same issue. (screenshot attached)

jklj077 commented 5 months ago
  1. If you are using vLLM, it is not compatible with Qwen1.0 out of the box. Consider FastChat+Qwen1.0, which includes the chat template and the stopping criteria by default (a manual workaround is sketched after this list).
  2. If you are using vLLM on long input sequences, it is not compatible with Qwen1.0 due to the different implementation of DynamicNTK. There is no workaround.
  3. If you are not using Chat models, the model may consider the input complete and generate "<|endoftext|>" (151643) directly.
  4. If you have run supervised fine-tuning (SFT) on a base model, compare the saved *_config.json files with those of the chat models and modify them as needed.
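
For items 1 and 3, here is a minimal sketch of calling vLLM directly on a Qwen1.0-style chat checkpoint: build the ChatML prompt by hand and pass the chat stop tokens explicitly, since plain vLLM does neither for Qwen1.0. The model path is a placeholder, and the token IDs assume Qwen1.0's tokenizer (`<|endoftext|>` = 151643, `<|im_end|>` = 151645).

```python
# Sketch only: serving a Qwen1.0-style chat/SFT checkpoint with vLLM directly.
# "path/to/sft-checkpoint" is a placeholder for your fine-tuned model.
from vllm import LLM, SamplingParams

# Qwen1.0 special tokens: <|endoftext|> = 151643, <|im_end|> = 151645.
STOP_TOKEN_IDS = [151643, 151645]

# vLLM does not apply Qwen1.0's chat template, so build the ChatML prompt manually.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n你好<|im_end|>\n"
    "<|im_start|>assistant\n"
)

llm = LLM(model="path/to/sft-checkpoint", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=512, stop_token_ids=STOP_TOKEN_IDS)

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```

On item 4, the settings worth diffing are typically the eos/stop-token fields in the saved config files: if the SFT checkpoint kept the base model's settings, generation could terminate immediately on `<|endoftext|>`, which would be consistent with the empty outputs reported here.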

All in all, please try upgrading to Qwen1.5, as Qwen1.0 is no longer actively maintained.
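
To illustrate the upgrade path (a sketch, assuming a stock Qwen1.5 chat checkpoint; the model name is an example): Qwen1.5 ships a Hugging Face chat template, so the prompt can be built with `tokenizer.apply_chat_template` instead of hand-written ChatML, and vLLM works with it directly.

```python
# Sketch only: Qwen1.5 with vLLM, using the tokenizer's built-in chat template.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "Qwen/Qwen1.5-7B-Chat"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The chat template inserts the ChatML markers and the generation prompt for us.
messages = [{"role": "user", "content": "你好"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_id)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=512))
print(outputs[0].outputs[0].text)
```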