jianzhnie / LLamaTuner

Easy and efficient finetuning of LLMs. (Supports LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Falcon.) Efficient quantized training and deployment of large models.
https://jianzhnie.github.io/llmtech/
Apache License 2.0

Bug fix for merging; verification results coming shortly #14

Open · apachemycat opened this issue 1 year ago

apachemycat commented 1 year ago

```python
if target_model_path is not None:
    print(f'Saving the target model to {target_model_path}')
    model.save_pretrained(target_model_path)
    base_tokenizer.save_pretrained(target_model_path)
```

A function call still needs to be added before saving:

```python
lora_model = lora_model.merge_and_unload()
```

merge_and_unload() works when the model is loaded in 16-bit, but not when it is loaded in 8-bit. I'm not sure whether the following call is also needed:

```python
lora_model.train(False)
```
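For context, here is a minimal sketch of the full merge-and-save flow being described, assuming the Hugging Face PEFT library. The paths `base_model_path`, `lora_adapter_path`, and `target_model_path` are hypothetical placeholders, not names taken from this repo:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical paths for illustration; substitute your own checkpoints.
base_model_path = 'path/to/base-model'
lora_adapter_path = 'path/to/lora-adapter'
target_model_path = 'path/to/merged-model'

# Load the base model in 16-bit. merge_and_unload() folds the LoRA
# weight deltas back into the base weights, which requires full
# (non-quantized) weights -- it does not work on an 8-bit model.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path, torch_dtype=torch.float16)
base_tokenizer = AutoTokenizer.from_pretrained(base_model_path)

# Attach the trained LoRA adapter, then merge it into the base weights.
lora_model = PeftModel.from_pretrained(base_model, lora_adapter_path)
lora_model = lora_model.merge_and_unload()

# train(False) only switches the model to eval mode (disabling dropout
# and similar layers); it does not change the merged weights, so it is
# harmless but not required for saving.
lora_model.train(False)

if target_model_path is not None:
    print(f'Saving the target model to {target_model_path}')
    lora_model.save_pretrained(target_model_path)
    base_tokenizer.save_pretrained(target_model_path)
```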