jianzhnie / LLamaTuner

Easy and efficient finetuning of LLMs. (Supports LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Falcon.) Efficient quantized training and deployment of large models.
https://jianzhnie.github.io/llmtech/
Apache License 2.0
569 stars 63 forks

After downloading the Baichuan-7B model and running inference directly in gradio_webserver.py, the generated content is garbled #78

Closed FDwangchao closed 1 year ago

FDwangchao commented 1 year ago

[screenshot] Hi, a question: as shown in the screenshot above, after downloading the Baichuan-7B model and running inference directly in gradio_webserver.py, the generated content is garbled.

FDwangchao commented 1 year ago

In addition, the log also reports this error: next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) RuntimeError: probability tensor contains either inf, nan or element < 0
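For context, this RuntimeError is raised by torch.multinomial itself whenever the probability tensor it receives contains non-finite values, which usually means the model's logits overflowed upstream (a dtype or model-loading problem, not a sampling bug). A minimal sketch reproducing the error, plus one common workaround (an assumption on my part, not something from this thread) of sanitizing the logits before sampling:

```python
import torch

# Reproduce the error from the log: multinomial rejects nan/inf probabilities.
probs = torch.tensor([0.5, float("nan"), 0.5])
try:
    torch.multinomial(probs, num_samples=1)
except RuntimeError as e:
    print("RuntimeError:", e)  # "probability tensor contains either `inf`, `nan` or element < 0"

# Hypothetical workaround (not from this thread): replace non-finite logits
# with large finite values before softmax, so sampling cannot crash.
logits = torch.tensor([1.0, float("inf"), -2.0])
safe_logits = torch.nan_to_num(logits, nan=0.0, posinf=1e4, neginf=-1e4)
safe_probs = torch.softmax(safe_logits, dim=-1)
sample = torch.multinomial(safe_probs, num_samples=1)  # succeeds
```

Sanitizing only masks the symptom; the root cause is typically loading the model in an unsupported precision or with a mismatched tokenizer, which also explains the garbled text.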

FDwangchao commented 1 year ago

Baichuan-7B here is the original base model, and it was launched like this: python gradio_webserver.py --model_name_or_path /models/Baichuan-7B
