Tele-AI / Telechat


telechat_infer_demo.py: incorrect output from the base-model direct-continuation demo #28

Open hzhaoy opened 3 months ago

hzhaoy commented 3 months ago

OS: Ubuntu 22.04.3 LTS
GPU: NVIDIA A100-SXM4-80GB
CUDA: Driver Version 535.154.05, CUDA Version 12.2

Code to reproduce:

import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

os.environ["CUDA_VISIBLE_DEVICES"] = '0'
PATH = '/models/Tele-AI/telechat-7B/'

# Load the tokenizer and the checkpoint's remote modeling code in fp16 on the GPU.
tokenizer = AutoTokenizer.from_pretrained(PATH)
model = AutoModelForCausalLM.from_pretrained(PATH, trust_remote_code=True,
                                             device_map="auto", torch_dtype=torch.float16)
generate_config = GenerationConfig.from_pretrained(PATH)
model.eval()

# Raw continuation: pass the plain string to generate() with no chat template.
inputs = "hello"
print("Input:", inputs)
output = model.generate(**tokenizer(inputs, return_tensors="pt").to(model.device),
                        generation_config=generate_config)
print("Continuation:", tokenizer.decode(output[0]))

inputs = "你是"
print("Input:", inputs)
output = model.generate(**tokenizer(inputs, return_tensors="pt").to(model.device),
                        generation_config=generate_config)
print("Continuation:", tokenizer.decode(output[0]))

Output: (screenshots attached)

liuxz0801 commented 3 months ago

Can you confirm that the model you are using is the base model? The base model has not been released yet, so under normal circumstances this API should not be called.
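For reference, the chat checkpoint is normally driven through the chat helper exposed by the checkpoint's remote code rather than through a bare generate() call. A minimal sketch, assuming the model.chat() call used in telechat_infer_demo.py (the exact keyword arguments may differ between releases):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

PATH = '/models/Tele-AI/telechat-7B/'

tokenizer = AutoTokenizer.from_pretrained(PATH)
model = AutoModelForCausalLM.from_pretrained(PATH, trust_remote_code=True,
                                             device_map="auto", torch_dtype=torch.float16)
generate_config = GenerationConfig.from_pretrained(PATH)
model.eval()

# model.chat() comes from the repo's remote code; it wraps the question in the
# chat prompt template before decoding, unlike a raw generate() call.
answer, history = model.chat(tokenizer=tokenizer, question="hello", history=[],
                             generation_config=generate_config, stream=False)
print(answer)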

hzhaoy commented 3 months ago

> Can you confirm that the model you are using is the base model? The base model has not been released yet, so under normal circumstances this API should not be called.

It is not the base model; it is the 7B-Chat model. I was running the snippet above from inference_telechat/telechat_infer_demo.py. (screenshot attached)
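If raw continuation on the chat checkpoint is attempted anyway, degenerate output is what one would expect, since the model was tuned to see its chat markers around every turn. Below is a minimal sketch of wrapping the input in those markers before generate(); the '<_user>'/'<_bot>' token strings are an assumption based on the repo's prompt format, not an official workaround:

# Hypothetical: rebuild the chat template by hand so generate() sees the same
# token layout that model.chat() would construct ('<_user>'/'<_bot>' assumed).
prompt = "<_user>" + "你是" + "<_bot>"
encoded = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**encoded, generation_config=generate_config)
print(tokenizer.decode(output[0]))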