mymusise / ChatGLM-Tuning

A fine-tuning (LoRA) solution based on ChatGLM-6B
MIT License

How do I load the fine-tuned model for inference? #185

Open dongdongrj opened 1 year ago

dongdongrj commented 1 year ago

I see the following two lines of code in infer.ipynb:

1. `model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, load_in_8bit=True, device_map='auto')`
2. `model = PeftModel.from_pretrained(model, "./output/")`

My question: the first call loads the original model, so what exactly does the second call load from the fine-tuned model? Does the second call update the original model's weights with the fine-tuned parameters?
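For context, a minimal sketch of the behavior being asked about, assuming a standard PEFT LoRA checkpoint under `./output/` (`adapter_config.json` plus `adapter_model.bin`): the first call loads the full, frozen ChatGLM-6B weights, while the second loads only the small LoRA matrices and attaches them to the matching layers, so an adapted projection computes roughly `W x + (alpha / r) * B A x` rather than overwriting `W`.

```python
# Hedged sketch of the LoRA composition PeftModel applies (illustrative, not the repo's code).
import torch

def lora_forward(x, W, A, B, alpha, r):
    """W is the frozen base weight; A and B are the small matrices loaded from ./output/."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)
```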

yyyhz commented 1 year ago

Same question here; hoping someone knowledgeable can answer.

Ambier commented 1 year ago

Following this code:

```python
import json

import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

from cover_alpaca2jsonl import format_example

# Load the base ChatGLM-6B model (note: the 8-bit load here is immediately
# replaced by the full-precision load on the next line).
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, load_in_8bit=True, device_map='auto')
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto')

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# Attach the LoRA adapter weights saved by fine-tuning.
model = PeftModel.from_pretrained(model, "./output/")

instructions = json.load(open("data/alpaca_data.json"))
answers = []

with torch.autocast("cuda"):
    for idx, item in enumerate(instructions[:3]):
        feature = format_example(item)
        input_text = feature['context']
        ids = tokenizer.encode(input_text)
        input_ids = torch.LongTensor([ids])
        out = model.generate(input_ids=input_ids, max_length=150, do_sample=False, temperature=0)
        out_text = tokenizer.decode(out[0])
        answer = out_text.replace(input_text, "").replace("\nEND", "").strip()
        item['infer_answer'] = answer
        print(out_text)
        print(f"### {idx+1}.Answer:\n", item.get('output'), '\n\n')
        answers.append({'index': idx, **item})
```
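As a follow-up, a hedged sketch (not from infer.ipynb) of two ways to confirm what `PeftModel.from_pretrained` actually loaded: the trainable-parameter count shows that only the small adapter was added on top of the frozen base, and newer peft releases can fold the LoRA deltas into the base weights for plain inference.

```python
# Hedged sketch: inspect or merge the loaded LoRA adapter (assumes the setup above).
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto')
model = PeftModel.from_pretrained(base, "./output/")

# Only the LoRA matrices (a few million parameters) are trainable;
# the ~6B base parameters stay frozen and are not overwritten.
model.print_trainable_parameters()

# Optional (newer peft versions): add the LoRA deltas into the base weights,
# returning a plain transformers model that behaves like a fully fine-tuned one.
merged_model = model.merge_and_unload()
```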