jianzhnie / LLamaTuner

Easy and Efficient Finetuning of LLMs. (Supports LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Falcon.) Efficient quantized training and deployment of large models.
https://jianzhnie.github.io/llmtech/
Apache License 2.0

utils/apply_lora.py has a small bug #10

Closed: apachemycat closed this issue 1 year ago

apachemycat commented 1 year ago

apply_lora(args.base_model_path, args.target_model_path, args.lora_path, args.save_target_model)

def apply_lora(
    base_model_path: str,
    lora_path: str,
    load_8bit: bool = False,
    target_model_path: str = None,
    save_target_model: bool = False,
) -> Tuple[AutoModelForCausalLM, AutoTokenizer]:

Two of the arguments are passed in the wrong positions: in the call above, args.target_model_path and args.lora_path are given positionally in an order that does not match the signature, where lora_path comes before target_model_path (see the sketch below).
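A minimal sketch of the fix, calling apply_lora with keyword arguments so the mapping to parameters is explicit. The PEFT/transformers logic inside the function body is an assumption about how the helper works, not the repository's exact implementation, and the argparse flags simply mirror the args.* names used in the report.

```python
import argparse
from typing import Tuple

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer


def apply_lora(
    base_model_path: str,
    lora_path: str,
    load_8bit: bool = False,
    target_model_path: str = None,
    save_target_model: bool = False,
) -> Tuple[AutoModelForCausalLM, AutoTokenizer]:
    """Assumed behavior: load the base model, merge the LoRA adapter, optionally save."""
    base_model = AutoModelForCausalLM.from_pretrained(
        base_model_path, load_in_8bit=load_8bit, torch_dtype='auto')
    tokenizer = AutoTokenizer.from_pretrained(base_model_path)

    # Attach the adapter weights and fold them into the base weights.
    lora_model = PeftModel.from_pretrained(base_model, lora_path)
    model = lora_model.merge_and_unload()

    if save_target_model and target_model_path is not None:
        model.save_pretrained(target_model_path)
        tokenizer.save_pretrained(target_model_path)
    return model, tokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--base_model_path', type=str, required=True)
    parser.add_argument('--lora_path', type=str, required=True)
    parser.add_argument('--target_model_path', type=str, default=None)
    parser.add_argument('--save_target_model', action='store_true')
    args = parser.parse_args()

    # Keyword arguments make the intent explicit, so the positional
    # mix-up described in this issue cannot happen.
    apply_lora(
        base_model_path=args.base_model_path,
        lora_path=args.lora_path,
        target_model_path=args.target_model_path,
        save_target_model=args.save_target_model,
    )
```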

jianzhnie commented 1 year ago

Thank you very much, the bug has been fixed.