Closed — Gary2018X closed this 5 months ago
If the model is bunny_qwen, why can't I run inference normally after merge_lora?
This is my code:

```shell
python script/merge_lora_weights.py \
    --model-path ./checkpoints-qwen1.5-1.8b/bunny-lora-qwen1.5-1.8b \
    --model-base /root/siton-glusterfs-eaxtsxdfs/xts/models/Qwen1.5-1.8B \
    --model-type qwen1.5-1.8b \
    --save-model-path ./merged_model
```
But the final `model_type` is still `bunny_qwen`.
"But the final `model_type` is still `bunny_qwen`" — that is expected behavior. But when evaluating, you should pass `model_type = qwen1.5-1.8b`.
Please read the README carefully.
As for "why can't I run inference normally after merge_lora?" — please share the error message.
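To confirm what the merge step actually wrote, a quick way to inspect the saved checkpoint is to read `model_type` from its `config.json` directly (a minimal sketch, assuming a Hugging Face-style checkpoint directory; `read_model_type` is a hypothetical helper, not part of Bunny):

```python
import json
from pathlib import Path


def read_model_type(checkpoint_dir: str) -> str:
    """Return the model_type stored in a checkpoint's config.json."""
    config_path = Path(checkpoint_dir) / "config.json"
    config = json.loads(config_path.read_text())
    return config.get("model_type", "<missing>")


# Usage (./merged_model is the --save-model-path from the merge command):
# print(read_model_type("./merged_model"))
```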
The snippet in Quickstart is only used for Bunny-v1.0-3B (SigLIP + Phi-2) and Bunny-v1.0-2B-zh (SigLIP + Qwen1.5-1.8B). We combine some configuration code into a single file for users' convenience. You can also check `modeling_bunny_qwen2.py` and `configuration_bunny_qwen2.py` and their related parts in the source code of Bunny to see the difference.

For other models, including models trained by yourself, we currently only support loading them with the source code of Bunny installed. Alternatively, you can copy `modeling_bunny_qwen2.py` and `configuration_bunny_qwen2.py` into your model directory and edit `config.json`.
Thanks so much.
I have solved this problem. I copied `modeling_bunny_qwen2.py` and `configuration_bunny_qwen2.py` into my model directory, then changed `model_type` in `config.json` to `bunny_qwen2` and added:

```json
"auto_map": {
    "AutoConfig": "configuration_bunny_qwen2.BunnyQwen2Config",
    "AutoModelForCausalLM": "modeling_bunny_qwen2.BunnyQwen2ForCausalLM"
}
```
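The config edit described above can be sketched as a small script (a sketch, not part of Bunny; `patch_config` is a hypothetical helper, and the `auto_map` class names are the ones quoted above):

```python
import json
from pathlib import Path


def patch_config(model_dir: str) -> None:
    """Rewrite config.json so the Transformers Auto* classes resolve
    to the bunny files copied into the model directory."""
    config_path = Path(model_dir) / "config.json"
    config = json.loads(config_path.read_text())

    # Switch the model type to the Bunny variant...
    config["model_type"] = "bunny_qwen2"
    # ...and map the Auto* entry points to the copied modules.
    config["auto_map"] = {
        "AutoConfig": "configuration_bunny_qwen2.BunnyQwen2Config",
        "AutoModelForCausalLM": "modeling_bunny_qwen2.BunnyQwen2ForCausalLM",
    }
    config_path.write_text(json.dumps(config, indent=2))
```

Loading should then work with `AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True)`, which follows the `auto_map` entries to the copied files.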
Thanks for your great work! In my training script, why is the `model_type` in the final LoRA config `bunny-qwen`?