unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

about finetuning #1051

Open djahzo opened 1 week ago

djahzo commented 1 week ago

I followed the README.md, but I get this error: "TypeError: LlamaRotaryEmbedding.__init__() got an unexpected keyword argument 'config'"

djannot commented 1 week ago

I got the same issue. It turned out I had cloned the unsloth repo into the directory I was running the training program from, so Python imported the local clone instead of the installed package.
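The shadowing described above happens because the script's directory (or the current working directory) sits at the front of sys.path, so a local folder with the same name as an installed package wins import resolution. A minimal stdlib-only sketch of the mechanic, using a hypothetical package name ("mypkg") rather than unsloth itself:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Create a throwaway directory containing a package that shadows
# a hypothetical installed package of the same name.
tmp = Path(tempfile.mkdtemp())
pkg = tmp / "mypkg"          # stand-in for a cloned repo directory
pkg.mkdir()
(pkg / "__init__.py").write_text("VERSION = 'local-clone'\n")

# Running a script from that directory effectively does this:
sys.path.insert(0, str(tmp))

mod = importlib.import_module("mypkg")
# __file__ resolves to the local copy, not site-packages:
print(mod.__file__)
print(mod.VERSION)
```

Checking `unsloth.__file__` after import is a quick way to confirm whether you are loading the installed package or a local checkout.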

danielhanchen commented 5 days ago

Sorry on the delay!! Would updating Unsloth work?

pip uninstall unsloth -y
pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"