jianzhnie / LLamaTuner

Easy and Efficient Finetuning of LLMs. (Supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon.) Efficient quantized training and deployment of large models.
https://jianzhnie.github.io/llmtech/
Apache License 2.0

The merge code has another error: a variable is missing in the middle #12

Closed apachemycat closed 1 year ago

apachemycat commented 1 year ago

Just found another issue: a parameter is missing, so the model-saving logic never takes effect. The code is as follows: parser.add_argument('--load_8bit', type=bool, default=False)

args = parser.parse_args()

apply_lora(args.base_model_path, args.lora_path, args.load_8bit, args.target_model_path,
           args.save_target_model)
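A side note on the snippet above (not raised in the thread): `type=bool` is itself a common argparse pitfall. argparse hands the raw command-line string to `bool()`, and any non-empty string, including "False", is truthy, so `--load_8bit False` still yields True. A minimal sketch demonstrating the pitfall and the usual `action='store_true'` alternative:

```python
import argparse

# Pitfall: type=bool calls bool() on the raw string argument.
buggy = argparse.ArgumentParser()
buggy.add_argument('--load_8bit', type=bool, default=False)
# bool('False') is True -- any non-empty string parses as True.
assert buggy.parse_args(['--load_8bit', 'False']).load_8bit is True

# Usual fix: a presence flag that defaults to False.
fixed = argparse.ArgumentParser()
fixed.add_argument('--load_8bit', action='store_true')
assert fixed.parse_args([]).load_8bit is False
assert fixed.parse_args(['--load_8bit']).load_8bit is True
```

With the flag form, the call site stays the same: `args.load_8bit` is a plain bool passed positionally to apply_lora.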
jianzhnie commented 1 year ago

The bugs have been fixed: https://github.com/jianzhnie/Efficient-Tuning-LLMs/blob/main/utils/apply_lora.py#L85