unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

No module named 'triton.common' #381

Open enzoli977 opened 6 months ago

enzoli977 commented 6 months ago

I was trying to fine-tune a Llama model locally on Win10. After installing the necessary environment, I tried from unsloth import FastLanguageModel and got: No module named 'triton.common'. I installed triton 2.0.0 from https://huggingface.co/r4ziel/xformers_pre_built/tree/main. Did I install the wrong package? Thanks!
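For what it's worth, a minimal way to reproduce and narrow this down (assuming unsloth and the triton wheel above are installed in the same environment) is:

import importlib.util
import triton

print(triton.__version__)
print(importlib.util.find_spec("triton.common"))  # None means the installed wheel does not ship triton.common

from unsloth import FastLanguageModel  # this is the import that raises ModuleNotFoundError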

danielhanchen commented 6 months ago

@enzoli977 Oh no, Windows is a complex beast :( Tbh only WSL, Linux, and Apple devices with NVIDIA GPUs are mostly supported, so you'll have to follow https://github.com/unslothai/unsloth/issues/210 for Windows support

NicolasMejiaPetit commented 6 months ago

@enzoli977

Windows is a huge, complicated garbage fire. Use the wheels I showed in the home.md in #210, along with installing the proper Visual Studio build tools, shown in a detailed picture at the bottom. It honestly isn't supposed to work, it just does. I can't expect this to keep working forever; half the things are deprecated in Microsoft's MSVC, and somehow that compiled triton (in the home.md link) works with it.

Also, I believe you didn't install the triton libraries and add them to PATH, which is quite essential. Either way, use my 2.1 triton wheel and the triton libraries I link in there. If you don't want a "trust me bro", find wkpark on GitHub; he made the wheels and libraries, and he might have made more up-to-date ones too. I only made the DeepSpeed wheels, and to this date I still have no clue how he compiled triton wheels for Windows. Still gotta figure it out, so if triton gets a major update I'm not screwed.
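If it helps, a rough sanity check of that setup (the directory below is only a placeholder for wherever you unpacked the triton libraries from #210) looks like:

import os
import importlib.util

# Placeholder: replace with the actual folder holding the triton libraries from #210.
triton_libs = r"C:\path\to\triton\libs"
print(triton_libs in os.environ.get("PATH", ""))  # the libraries need to be on PATH

import triton
print(triton.__version__)  # should report the 2.1 wheel mentioned above
print(importlib.util.find_spec("triton.common") is not None)  # False matches the original error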

madiator commented 5 months ago

I see this on some Linux instances on RunPod.

danielhanchen commented 5 months ago

@madiator I'm assuming it's torch 2.3 - change xformers to "xformers<0.0.26"
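A quick way to confirm this is the mismatch you are hitting (just printing versions, nothing unsloth-specific):

import torch
import xformers

# Per the comment above: with torch 2.3, pin xformers below 0.0.26.
print("torch:", torch.__version__)
print("xformers:", xformers.__version__)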

zzerrrro commented 5 months ago

@enzoli977 Have you already resolved the No module named 'triton.common' error?

beautifulsonset commented 3 months ago

Have you resolved the No module named 'triton.common' error yet?

danielhanchen commented 3 months ago

It should hopefully work now! I added some checks to not call triton.common on Triton v3 or higher.
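The shape of that guard is roughly the following (a sketch only, not the actual unsloth code; it assumes the packaging package is available):

from packaging.version import Version
import triton

# Only touch triton.common on Triton < 3, where the module still exists.
if Version(triton.__version__) < Version("3.0.0"):
    import triton.common  # noqa: F401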

mattsthilaire commented 1 month ago

I still seem to be getting this error on both RunPod and Lambda Labs. I just copy + pasted the pip installs from the Colab tutorials:

!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes

I tried simply upgrading triton to 3.0.0 (basically trying to bypass the line that causes this issue) and then got "module 'torch' has no attribute 'compiler'". After trial-and-error pip installs of various triton and torch versions, I kept getting different errors. Any ideas on this? For context, I'm using RunPod and Lambda Labs because I want to try fine-tuning 70B Llama on that sweet, sweet A100.
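In case it helps anyone hitting the same combination, a small version dump (nothing unsloth-specific) narrows down which piece is out of date; torch.compiler only exists in newer torch releases, which is what the "no attribute 'compiler'" error points at:

import torch
import triton

print("torch:", torch.__version__)
print("triton:", triton.__version__)
print("has torch.compiler:", hasattr(torch, "compiler"))  # False means this torch predates torch.compiler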