hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Garbled responses when serving Qwen/Qwen2-7B-Instruct with llamafactory-cli api #4226

Closed · derrickcyt closed this issue 5 months ago

derrickcyt commented 5 months ago

Reminder

System Info

OS: CentOS 7
NVIDIA driver: 450.156.00 (nvidia-smi reports CUDA Version 11.8)
Python: 3.8.12
torch: 2.0.0
transformers: 4.41.2

Reproduction

CUDA_VISIBLE_DEVICES=0 API_PORT=8081 llamafactory-cli api examples/inference/qwen2_7B.yaml
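
For reference, a minimal sketch of what the referenced config could contain. The actual contents of examples/inference/qwen2_7B.yaml are not shown in this issue; model_name_or_path and template are the standard LLaMA-Factory inference keys, and "qwen" is assumed here to be the matching chat template.

```yaml
# Hypothetical reconstruction of examples/inference/qwen2_7B.yaml (not taken from the issue)
model_name_or_path: Qwen/Qwen2-7B-Instruct   # Hub ID or local path to the downloaded weights
template: qwen                               # chat template; a mismatched template can produce garbled replies
```

With API_PORT=8081, the command exposes LLaMA-Factory's OpenAI-compatible API, so clients would send chat requests to http://localhost:8081/v1.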

Expected behavior

No response

Others

No response

onlyjokers commented 5 months ago

Same issue here.

hiyouga commented 5 months ago

It looks fine on my side. Check whether your model files are correct: https://huggingface.co/Qwen/Qwen2-7B-Instruct/tree/main

(screenshot attached)
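
One way to act on this suggestion (a sketch, not from the thread): re-fetch the official files with huggingface_hub and point model_name_or_path at the result, so an incomplete local download can be ruled out.

```python
# Sketch: re-download Qwen/Qwen2-7B-Instruct to rule out broken local model files.
# Assumes the huggingface_hub package is installed; intact cached files are reused,
# missing or incomplete ones are fetched again.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Qwen/Qwen2-7B-Instruct")
print("Model files available at:", local_dir)
```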