unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

3B finetuned model being merged into 7B model when saving to use in vLLM #1213

Closed: pusapatiakhilraju closed this 2 weeks ago

pusapatiakhilraju commented 4 weeks ago

I ran the Colab notebook below, provided by Unsloth, from start to end. The save method is save_method = "merged_16bit", yet the merge step reports that the adapter is being merged into a Llama 7B model (screenshot of the merge log from 2024-10-29 attached).

The base model I am using is "unsloth/Llama-3.2-3B-Instruct", so the adapter should be merged into the 3B model. Any help here is appreciated.

How can I merge it into the 3B model?

https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing#scrollTo=iHjt_SMYsd3P
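For reference, this is roughly the load-and-save sequence from the notebook. It is a minimal sketch of the Unsloth flow; the output directory name and the LoRA hyperparameters shown here are illustrative, not the exact notebook values:

```python
from unsloth import FastLanguageModel

# Load the 3B base model and attach LoRA adapters (training omitted for brevity).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

# ... finetuning happens here ...

# Merge the LoRA weights into the base model and save in 16-bit for vLLM.
model.save_pretrained_merged(
    "llama-3.2-3b-instruct-finetuned",   # hypothetical output directory
    tokenizer,
    save_method = "merged_16bit",
)
```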

danielhanchen commented 4 weeks ago

Oh, that's just a warning message - ignore it! :) The merge is performed against whatever base model you loaded, so with unsloth/Llama-3.2-3B-Instruct the saved checkpoint is still a 3B model.
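If you want to double-check, a quick sanity check is to count the parameters of the saved checkpoint; this sketch assumes the merged model was saved to a local directory (the path below is hypothetical):

```python
from transformers import AutoModelForCausalLM

# Load the merged checkpoint and count its parameters;
# a Llama-3.2-3B merge should report roughly 3.2B parameters.
model = AutoModelForCausalLM.from_pretrained(
    "llama-3.2-3b-instruct-finetuned",  # hypothetical local output directory
    torch_dtype = "auto",
)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
```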