unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
17.52k stars 1.21k forks

Can't import unsloth when both the latest version of unsloth and transformers are installed #1179

Open lossflow opened 6 days ago

lossflow commented 6 days ago

To repro: install the latest versions of unsloth and transformers:

!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip uninstall transformers -y && pip install --upgrade --no-cache-dir "git+https://github.com/huggingface/transformers.git"

Then try to import unsloth; it fails with: /usr/local/lib/python3.10/dist-packages/unsloth/kernels/cross_entropy_loss.py in Unsloth_LlamaForCausalLM() NameError: name 'Unpack' is not defined

This doesn't affect the demo notebooks because they don't install the latest version of transformers. An older version of transformers can be used instead, but it doesn't work as well with gradient accumulation and emits the warning "Unsloth: Using our custom gradient accumulation fixed trainer, which is not feature complete."
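For context, `Unpack` was added to the standard `typing` module in Python 3.11 (PEP 692) and is only available on older interpreters, such as the 3.10 runtime in the traceback above, via the `typing_extensions` backport. A minimal, hypothetical sketch of the kind of version-guarded import that avoids this `NameError` (not the actual unsloth fix; the `KwArgs`/`generate` names are made up for illustration):

```python
# Hypothetical sketch: `Unpack` lives in `typing` on Python >= 3.11,
# and in `typing_extensions` on Python <= 3.10.
try:
    from typing import Unpack  # Python 3.11+
except ImportError:
    from typing_extensions import Unpack  # backport for older interpreters

from typing import TypedDict


class KwArgs(TypedDict):
    temperature: float


def generate(**kwargs: Unpack[KwArgs]) -> float:
    # Unpack types **kwargs against the TypedDict (PEP 692);
    # runtime behavior is unchanged.
    return kwargs["temperature"]
```

Code that annotates `**kwargs` with `Unpack` but omits such a guard (or the `typing_extensions` import) will raise exactly this `NameError` on Python 3.10.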

Austin1207 commented 6 days ago

both

lossflow commented 5 days ago

I can no longer repro after 3 attempts. Not sure what was happening.

lossflow commented 5 days ago

Repro-ed again.

Erland366 commented 5 days ago

Working on a fix!

R4ZZ3 commented 5 days ago

I am having the same issue

danielhanchen commented 5 days ago

@R4ZZ3 @lossflow Apologies fixed! Please update unsloth on local installations (or reload / restart Colab / Kaggle) - pip install --upgrade --no-cache-dir unsloth

urisolve commented 5 days ago

> @R4ZZ3 @lossflow Apologies fixed! Please update unsloth on local installations (or reload / restart Colab / Kaggle) - pip install --upgrade --no-cache-dir unsloth

I confirm it is working now. Many thanks.