NicolasMejiaPetit opened 8 months ago
I can provide wheels if you'd like to make an auto-installer for Python 3.11, with Triton and DeepSpeed wheels for CUDA 12.1. (I'm not sure whether the CUDA version matters for this; it's been a while since I managed to install DeepSpeed and Triton on native Windows.) You do still need a set of libraries on PATH, but there is a link to those inside the GitHub repo linked from the Twitter thread.
@NickWithBotronics Great work again!! Oh, it'll be sick if wheels can be provided - I could copy-paste your exact paths and tack them into pyproject.toml
Uploaded wheels to google drive.
https://drive.google.com/drive/folders/1aWSFb-ZR8TTIDdRlDBBCh-YvvCxmt6Bc?usp=sharing
I had to make one modification to the code (although that might be an issue with the training script I was using): delete the block at line 1291 in C:\Users\Training\AppData\Local\Programs\Python\Python311\Lib\site-packages\unsloth\models\llama.py that reads `for module in target_modules: assert(module in accepted_modules)` followed by `pass`. And lastly, you will likely still need Visual Studio 2022 (don't forget the libraries available here: https://github.com/wkpark/triton/releases/tag/llvm-build-for-win). Honestly, I would just add them into the NVIDIA computing folder, then add the bin folder inside that to your environment variables.
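To illustrate the block being removed, here is a sketch of that kind of whitelist assertion. The `accepted_modules` contents here are hypothetical (the real list lives in unsloth's llama.py); the point is that the assertion rejects any LoRA target module outside a fixed whitelist, which is why deleting it unblocks training scripts that pass extra module names:

```python
# Sketch of the validation removed at llama.py line 1291.
# The module names below are hypothetical stand-ins for unsloth's real list.
accepted_modules = frozenset(("q_proj", "k_proj", "v_proj", "o_proj"))

def check_target_modules(target_modules):
    for module in target_modules:
        assert module in accepted_modules, f"{module} not accepted"

check_target_modules(["q_proj", "v_proj"])   # passes
# check_target_modules(["lm_head"])          # would raise AssertionError
```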
@NickWithBotronics Thanks :) Actually would you be interested in adding a section to our https://github.com/unslothai/unsloth/wiki on how to install it for Windows? :)
Yeah, I'd love to help in any way I can, this repository is awesome!
@NickWithBotronics That'll be sick!! :) Actually unsure how editing wiki pages works lol
To add onto this - I ran into an issue on native Windows with xformers when building unsloth, so I forked my own version and edited the pyproject.toml to use the xformers wheel for my python/torch/cuda, then built from source without any more issues:

    cu121onlytorch220 = [
        "xformers @ https://download.pytorch.org/whl/cu121/xformers-0.0.24-cp39-cp39-manylinux2014_x86_64.whl ; python_version=='3.9'",
        "xformers @ https://download.pytorch.org/whl/cu121/xformers-0.0.24-cp310-cp310-manylinux2014_x86_64.whl ; python_version=='3.10'",
        "xformers @ https://files.pythonhosted.org/packages/02/10/aaa3b7547fec9e28948e723fb97585200a7070e810b4d1d0813fc1821690/xformers-0.0.24-cp311-cp311-win_amd64.whl ; python_version=='3.11'",
    ]
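For anyone reading that fragment, the `; python_version==...` suffixes are PEP 508 environment markers, so pip selects exactly one wheel per interpreter version. A small sketch of the same selection logic (the URLs are copied from the fragment above; `wheel_for` is a hypothetical helper, not part of unsloth):

```python
import sys

# pip evaluates the environment markers and installs exactly one of these.
# The mapping below mirrors the cu121onlytorch220 entries shown above.
WHEELS = {
    (3, 9):  "https://download.pytorch.org/whl/cu121/xformers-0.0.24-cp39-cp39-manylinux2014_x86_64.whl",
    (3, 10): "https://download.pytorch.org/whl/cu121/xformers-0.0.24-cp310-cp310-manylinux2014_x86_64.whl",
    (3, 11): "https://files.pythonhosted.org/packages/02/10/aaa3b7547fec9e28948e723fb97585200a7070e810b4d1d0813fc1821690/xformers-0.0.24-cp311-cp311-win_amd64.whl",
}

def wheel_for(version_info=sys.version_info):
    """Return the matching xformers 0.0.24 wheel URL, or None if unsupported."""
    return WHEELS.get(tuple(version_info[:2]))
```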
Actually, I do remember running into an xformers issue: I had run 'pip install xformers' and pip found that 3.11 package for Windows and installed it. Thanks for reporting! I will include it in the readme!
Home.md - Can't create a PR since I'm not a contributor, but here is the updated wiki with Windows install instructions.
I don't have a Git LFS account to host the wheels on, but if you would like to replace the link so it's not pointing at my Google Drive, feel free.
I fixed the readme to show how to fix the xformers Linux compatibility error.
Then the command prompt output should look like this: (screenshot: Command-Prompt Output)
- DeepSpeed's back end uses Ninja mixed with C++ to compile kernels at run time. Does this mess with unsloth? I don't know, but either way it's an easy install, so install it to be safe: `pip install ninja`
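A minimal way to check that prerequisite before a run (a sketch; `ninja_available` is a hypothetical helper, and this only confirms the binary is discoverable, not that DeepSpeed's JIT build will succeed):

```python
import shutil

def ninja_available():
    # DeepSpeed's JIT kernel builds shell out to the `ninja` binary,
    # so it must be discoverable on PATH.
    return shutil.which("ninja") is not None

if not ninja_available():
    print("ninja not found - run `pip install ninja` first")
```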
I got unsloth running on native Windows (no WSL). You need the Visual Studio 2022 C++ compiler, Triton, and DeepSpeed. I have a full tutorial on installing it; I would write it all here, but I'm on mobile right now. Here is the link: https://x.com/mejia_petit/status/1763391797575741707?s=46
This link is empty
Yeahhh, Twitter (aka X) likes to change their links around like crazy. Use the instructions at the bottom of the readme; they are more thorough than the Twitter thread. And use the picture to see what to download for Build Tools.
Thank you for your contribution. I also made a Docker image that lets you use unsloth locally right away: https://github.com/Jiar/jupyter4unsloth
@NicolasMejiaPetit @danielhanchen Heyy!!! I actually ran into `ModuleNotFoundError: No module named 'triton'` while fine-tuning google/gemma-7b-it. I installed xformers successfully through documentation I found by Unsloth, but when running `from unsloth import FastLanguageModel` I was thrown this error. I've been trying to solve this error for quite some time and can't seem to find any Reddit or GitHub solutions to this triton issue. Also, I am working on Windows 11.
the error kinda looks like this:

    Traceback (most recent call last):
      File "C:\Users\Megha\Gemma_DPO\sft_trainer.py", line 14, in <module>
        from unsloth import FastLanguageModel
      File "C:\Users\Megha\Gemma_DPO\sanedai\Lib\site-packages\unsloth\__init__.py", line 103, in <module>
        import triton
    ModuleNotFoundError: No module named 'triton'
Please help I'm really stuck! Thanks in advance. Good day!
Ye, it's best to follow the community guides posted here to install Unsloth on Windows - sadly I'm limited on bandwidth, so I can't "officially" support Windows yet
Make sure you're using the Triton wheel here, as the standard install isn't Windows compatible: https://drive.google.com/drive/folders/1aWSFb-ZR8TTIDdRlDBBCh-YvvCxmt6Bc
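One quick preflight check before `from unsloth import FastLanguageModel` (a sketch; `has_triton` is a hypothetical helper, and it only confirms that *some* triton package is importable, not that it is the Windows build from the wheel above):

```python
import importlib.util

def has_triton():
    # find_spec returns None when the module cannot be located,
    # which is the immediate cause of the ModuleNotFoundError above.
    return importlib.util.find_spec("triton") is not None

if not has_triton():
    print("triton missing: install the Windows triton wheel before importing unsloth")
```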
> Uploaded wheels to google drive.
> https://drive.google.com/drive/folders/1aWSFb-ZR8TTIDdRlDBBCh-YvvCxmt6Bc?usp=sharing
I know I'm asking for too much, but a lot of AI tools need Python 3.10 and aren't supported on higher versions. If you get time, I need a Triton wheel for Python 3.10.
@NicolasMejiaPetit
> Uploaded wheels to google drive.
Tested the wheel in a Python 3.11 conda environment and it worked great. There are two pre-reqs to using it though, to enable the required on-the-fly Triton JIT compilation which unsloth (via Triton) triggers when the trainer runs (trainer.train):
1. The environment variable `CC` must be set to the exact value `cl`.
2. The MSVC shell environment must already be initialized, so that `cl` is on PATH. (*)

When those two conditions are satisfied, your wheel works fine with the unsloth trainer.
I personally solved this little gotcha by having the following at the top of my training scripts or notebooks:
# windows triton fix
import os, shutil
os.environ['CC'] = 'cl' # for triton jit compilation
assert shutil.which('cl') is not None, "VS Studio shell environment not initialized (prereq for triton)"
(*) I assume you added the `_cc_cmd` function, since I can't find it in the Triton repository's history. It's not part of 2.1.0.
The code in your build looks like this in contrast to upstream:
    else:
        cc_cmd = _cc_cmd(cc, src, so, [cu_include_dir, py_include_dir, srcdir], [*cuda_lib_dirs, *py_lib_dirs])
        ret = subprocess.check_call(cc_cmd)
and `_cc_cmd` makes a hard assumption that CC is named "cl" exactly, as well as the MSVC environment being already initialized:
    def _cc_cmd(cc, src, out, include_dirs, library_dirs):
        if cc == "cl":
            cc_cmd = [cc, src, "/nologo", "/O2", "/LD"]
            cc_cmd += [f"/I{dir}" for dir in include_dirs]
            cc_cmd += ["/link"]
            cc_cmd += [f"/LIBPATH:{dir}" for dir in library_dirs]
            cc_cmd += ["cuda.lib", f"/OUT:{out}"]
        else:
            cc_cmd = [cc, src, "-O3", "-shared", "-fPIC"]
            cc_cmd += [f"-I{dir}" for dir in include_dirs]
            cc_cmd += [f"-L{dir}" for dir in library_dirs]
            cc_cmd += ["-lcuda", "-o", out]
            if os.name == "nt": cc_cmd.pop(cc_cmd.index("-fPIC"))
        return cc_cmd