unslothai / unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Can we support the Qwen series? #436

Open zhangfan-algo opened 3 months ago

erwe324 commented 3 months ago

Llamafied Qwen models can be loaded directly.

zhangfan-algo commented 3 months ago

Can you provide a script for converting the format?

erwe324 commented 3 months ago

Some of the previous issues on this topic include conversion scripts. As for the models, you can search Hugging Face for the keywords "qwen llamafied".
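
For example, a minimal sketch of loading a llamafied Qwen checkpoint with Unsloth (the repo ID below is a placeholder; substitute a llamafied Qwen model found via that search):

from unsloth import FastLanguageModel

# Placeholder repo ID -- replace with an actual llamafied Qwen checkpoint from Hugging Face.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "someuser/Qwen-7B-llamafied",
    max_seq_length = 2048,
    dtype = None,          # auto-detect (bfloat16 on newer GPUs, float16 otherwise)
    load_in_4bit = True,   # 4-bit loading to cut memory use
)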

danielhanchen commented 3 months ago

Oh, I think someone made a PR for Qwen.

danielhanchen commented 3 months ago

Qwen should be supported now, everyone! @zhangfan-algo @erwe324 If you're on a local machine, please update Unsloth via

pip uninstall unsloth -y
pip install --upgrade --force-reinstall --no-cache-dir git+https://github.com/unslothai/unsloth.git

Colab and Kaggle are fine (just restart the runtime).
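
Once updated, a Qwen model should load like any other supported architecture. A minimal sketch (the model ID below is an assumption; pick whichever Qwen checkpoint you need from Hugging Face):

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Qwen/Qwen1.5-7B",  # assumed checkpoint; swap in the Qwen variant you want
    max_seq_length = 2048,
    dtype = None,          # auto-detect
    load_in_4bit = True,   # reduces VRAM usage for finetuning
)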