jianzhnie / LLamaTuner

Easy and efficient finetuning of LLMs. (Supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon.) Efficient quantized training and deployment of large models.
https://jianzhnie.github.io/llmtech/
Apache License 2.0

Training succeeded, but there is no merge script. How do I merge the weights? #7

Open apachemycat opened 1 year ago

apachemycat commented 1 year ago

Can't find 'adapter_config.json' at '/wzh/Chinese-Guanaco/models/guanaco-13b-merged-wzhtrain

jianzhnie commented 1 year ago

Here is the merge code: https://github.com/jianzhnie/Efficient-Tuning-LLMs/blob/main/chatllms/utils/apply_lora.py

Morxrc commented 1 year ago

404 - page not found

jianzhnie commented 1 year ago

https://github.com/jianzhnie/Efficient-Tuning-LLMs/blob/main/chatllms/utils/apply_lora.py

jianzhnie commented 1 year ago

CUDA_VISIBLE_DEVICES=0 python chatllms/utils/apply_lora.py \
    --base-model-path ~/checkpoints/baichuan7b/ \
    --lora-model-path ./work_dir/vicuna_merge_vicuna-baichuan-7b-1gpu/checkpoint-15000 \
    --target-model-path ./work_dir/vicuna_merge_vicuna-baichuan-7b/merged_model
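For anyone landing here without access to the linked script: conceptually, merging a LoRA adapter into a base model just folds the low-rank update back into each adapted weight matrix, W' = W + B·A·(alpha/r). This is the same per-layer arithmetic that PEFT-style merge utilities perform. Below is a minimal NumPy sketch of that math only; all names, shapes, and values here are illustrative assumptions, not taken from apply_lora.py.

```python
import numpy as np

def merge_lora(base_w, lora_a, lora_b, alpha, r):
    """Fold a low-rank LoRA update into a base weight matrix.

    base_w: (out_dim, in_dim) frozen base weight
    lora_a: (r, in_dim)       LoRA down-projection
    lora_b: (out_dim, r)      LoRA up-projection
    alpha, r: LoRA scaling hyperparameters (scaling = alpha / r)
    """
    scaling = alpha / r
    return base_w + (lora_b @ lora_a) * scaling

# Toy dimensions, standing in for one linear layer of the model.
rng = np.random.default_rng(0)
out_dim, in_dim, r, alpha = 8, 16, 4, 8
base = rng.normal(size=(out_dim, in_dim))
a = rng.normal(size=(r, in_dim))
b = rng.normal(size=(out_dim, r))

# After merging, the adapter matrices can be discarded: the merged
# weight alone reproduces base_w @ x + scaling * (B @ A) @ x.
merged = merge_lora(base, a, b, alpha, r)
```

In a real checkpoint this loop runs over every adapted layer, after which the model is saved as a plain (adapter-free) checkpoint, which is why the merged output directory no longer needs an adapter_config.json.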