ymcui / Chinese-LLaMA-Alpaca-2

中文LLaMA-2 & Alpaca-2大模型二期项目 + 64K超长上下文模型 (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0

V100 GPU: with vLLM enabled vs. disabled, the same prompt produces different results #344

Closed neal668 closed 10 months ago

neal668 commented 11 months ago

Pre-submission checklist

Issue type

Model inference

Base model

Chinese-Alpaca-2-16K (7B/13B)

Operating system

Linux

Detailed description of the problem

Run each of the following commands and enter prompts interactively:
python3 scripts/inference/inference_hf.py --base_model chinese-alpaca-2-lora-13b --with_prompt --interactive --use_vllm
and
python3 scripts/inference/inference_hf.py --base_model chinese-alpaca-2-lora-13b --with_prompt --interactive

One change was made to inference_hf.py: at line 109, dtype='float16' was added.
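An aside on why outputs can diverge even for identical inputs and weights: floating-point addition is not associative, so two backends (e.g. the plain Transformers path vs. vLLM's custom kernels) that reduce the same float16 values in different orders can produce slightly different logits. When the top two tokens are nearly tied, greedy decoding can then pick a different token, and the divergence compounds over the rest of the generation. A minimal stdlib-only sketch of the underlying effect (shown in float64 for clarity; the effect is much stronger in float16):

```python
# Floating-point addition is order-dependent: summing the same values
# in a different order can give a different result. GPU kernels with
# different reduction orders behave like the two groupings below.
big = 2.0 ** 53          # smallest double whose unit-in-last-place is 2

left_to_right = (big + 1.0) + 1.0   # each +1.0 is lost to rounding
grouped = big + (1.0 + 1.0)         # the 2.0 is large enough to survive

print(left_to_right == grouped)     # prints False: same numbers, different order
```

This is why a small numerical wobble between backends is expected; the question is whether it is large enough to change the decoded text under the chosen decoding settings.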

Dependencies (required for code-related issues)

# Paste your dependency information here (inside this code block)

Run logs or screenshots

# Paste your run log here (inside this code block)
iMountTai commented 11 months ago

Is the model you pass to --base_model the LoRA adapter or the merged model? If it is the merged model, did you merge it yourself or download the full weights directly?

Lyu6PosHao commented 11 months ago

It may be a decoding-strategy issue; check your generation_config.
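One way to follow up on this suggestion: pin both runs to greedy decoding, so that any remaining difference is numerical rather than sampling noise. A hedged sketch of comparable settings (shown as plain dicts; the parameter names follow the usual Transformers `generate()` kwargs and vLLM `SamplingParams` fields, and the values are illustrative, not taken from this repo's defaults):

```python
# Settings for the Transformers path: disable sampling entirely so the
# output is the deterministic argmax at each step.
hf_generation_kwargs = dict(
    do_sample=False,     # greedy decoding
    num_beams=1,         # no beam search
    max_new_tokens=512,
)

# Settings for the vLLM path: temperature=0.0 selects greedy decoding,
# making top_p/top_k irrelevant.
vllm_sampling_kwargs = dict(
    temperature=0.0,
    top_p=1.0,
    top_k=-1,
    max_tokens=512,
)
```

If the two backends still disagree under greedy decoding, the cause is likely numerical (precision, kernel order) rather than the decoding strategy.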

github-actions[bot] commented 10 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 10 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.

tkone2018 commented 9 months ago

@neal668 Hi, did you manage to solve this?