baichuan-inc / Baichuan2

A series of large language models developed by Baichuan Intelligent Technology
https://huggingface.co/baichuan-inc
Apache License 2.0

Error when calling chat on the merged model: generation_utils.py unsupported operand type(s) for -: 'int' and 'NoneType' #360

Open growmuye opened 8 months ago

growmuye commented 8 months ago

```python
import json

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig


def init_model():
    print("init model ...")
    merged_model_path = "/temp/LLM_Merged/baichuan2-13b-chat-merged/zhuyitu"
    model = AutoModelForCausalLM.from_pretrained(
        merged_model_path,
        torch_dtype=torch.float16,
        device_map="auto",
        trust_remote_code=True,
    )
    model.generation_config = GenerationConfig.from_pretrained(merged_model_path)
    tokenizer = AutoTokenizer.from_pretrained(
        merged_model_path,
        use_fast=False,
        trust_remote_code=True,
    )

    # Load the SFT LoRA adapter on top of the merged model
    model = PeftModel.from_pretrained(model, "./outputs-sft-baichuan2-zhuyitu-v5")

    return model, tokenizer


def main(stream=False):
    model, tokenizer = init_model()
    messages = [{"role": "user", "content": "你好"}]
    print(f"输入数据:{json.dumps(messages, ensure_ascii=False)}")
    response = model.chat(tokenizer, messages)
    print(f"输出数据:{json.dumps(response, ensure_ascii=False)}")


if __name__ == "__main__":
    main()
```

Error message:
```
Traceback (most recent call last):
  File "/home/jovyan/MedicalGPT/test_merged.py", line 44, in <module>
    main()
  File "/home/jovyan/MedicalGPT/test_merged.py", line 39, in main
    response = model.chat(tokenizer, messages)
  File "/home/jovyan/.cache/huggingface/modules/transformers_modules/zhuyitu/modeling_baichuan.py", line 816, in chat
    input_ids = build_chat_input(self, tokenizer, messages, generation_config.max_new_tokens)
  File "/home/jovyan/.cache/huggingface/modules/transformers_modules/zhuyitu/generation_utils.py", line 25, in build_chat_input
    max_input_tokens = model.config.model_max_length - max_new_tokens
TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'
```
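From the traceback, `build_chat_input` computes `model.config.model_max_length - max_new_tokens`, so the `TypeError` means `generation_config.max_new_tokens` is `None`, i.e. the `generation_config.json` saved with the merged checkpoint apparently does not define `max_new_tokens`. A minimal workaround sketch, assuming that is the cause (the value 2048 below is only illustrative, not taken from the original post):

```python
# Workaround sketch: make sure max_new_tokens is set before calling chat().
# Assumption: the merged checkpoint's generation_config.json lacks this field;
# 2048 is an illustrative value, adjust as needed.
model, tokenizer = init_model()
if model.generation_config.max_new_tokens is None:
    model.generation_config.max_new_tokens = 2048

messages = [{"role": "user", "content": "你好"}]
response = model.chat(tokenizer, messages)
print(response)
```

Copying the `generation_config.json` from the original baichuan-inc/Baichuan2-13B-Chat checkpoint into the merged model directory may achieve the same thing, if that file defines `max_new_tokens`.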