Open achalpandeyy opened 1 year ago
I really want this as well! It would be so useful for so many things!
I would also be really grateful if it happens.
This is a pretty frequent request. Let me see what we can do about it.
+1
With torch.compile relying heavily on Triton, lots of Hugging Face users also seem interested in this :-)
We have a number of interested parties optimizing inference times for Invoke AI on Windows. We're currently evaluating alternatives, but as @patrickvonplaten noted above, torch.compile is the most straightforward but requires Triton.
+1
I get `RuntimeError: Windows not yet supported for torch.compile` with CUDA 12.1 and PyTorch 2.1.0. It seems Triton, which is not available on Windows, is the main reason. How can we get a Windows version?
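For context, the error comes from a hard platform check inside PyTorch, since torch.compile depends on Triton and Triton has no official Windows build. A minimal sketch of that kind of guard (illustrative only; the function names here are not PyTorch's actual internals):

```python
import platform


def is_compile_supported():
    """Illustrative sketch, not PyTorch's actual implementation:
    torch.compile depends on Triton, which has no official Windows build."""
    return platform.system() != "Windows"


def check_compile_supported():
    # Raise the same error message Windows users see from torch.compile.
    if not is_compile_supported():
        raise RuntimeError("Windows not yet supported for torch.compile")
```

The practical consequence is that until Triton ships Windows wheels, torch.compile has to be skipped or run under WSL2 on Windows machines.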
+1
+1, many Python packages support Windows, and I hope this one will as well.
+1
+1
+1
+1
@ptillet anything we could do to help you implement this? With PyTorch 2.x becoming more and more dependent on Triton, this feature request will only become more important, I think.
Can we help you here in any way?
Please add support for Windows.
I hate seeing the "Triton not available on Windows" message.
The way to help here is probably to just submit a PR that adds Windows support :) We won't have CI for it anytime soon, though.
The issues, and the solutions to them found so far (also somewhat related to #1560):

* Fixing the URL issue `ValueError: unknown url type: ''` — it seems `LLVM_SYSPATH` is not set in the system path. I added it, but it still doesn't work properly for me. The workaround was setting the variable manually in `setup.py`: `os.environ['LLVM_SYSPATH'] = 'path/to/llvm_build'`
* Another issue I found was with the target / build type. I couldn't get the MSYS / Ninja generator working, so I am just using my default, Visual Studio 17 2022. I had to force the `get_build_type` function to return `RelWithDebInfo`.
* The next issue I hit is that `MLIRGPUOps` (and the other two files in `Conversion`) doesn't exist in the build. As I am using LLVM 17 built from master (version 17 is what's used on Linux), it seems it was renamed to `MLIRGPUDialect`.
* Another issue: I couldn't build with VS + clang (I got some error with the `-f` flag), so I had to stay with MSVC. I then got an error about the `/Werror` value being incorrectly set, and had to change the configuration to just `set(CMAKE_CXX_FLAGS "/std:c++17")`.
* Currently stuck because `'C:\Users\potato\Desktop\llvm-project\build\RelWithDebInfo\bin\mlir-tblgen.exe' is not recognized as an internal or external command, operable program or batch file.` It seems there is some issue with it not being built.
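The `LLVM_SYSPATH` workaround from the first bullet can be applied near the top of `setup.py`, before the build configuration runs. A minimal sketch (the path is a placeholder from the comment above, not a real location):

```python
import os

# Point Triton's build at a local LLVM build instead of downloading one.
# 'path/to/llvm_build' is a placeholder; substitute your actual
# llvm-project build directory. setdefault() keeps any value already
# set in the environment, so this only acts as a fallback.
os.environ.setdefault("LLVM_SYSPATH", "path/to/llvm_build")
```

Setting the variable in the shell before invoking `pip`/`setup.py` should have the same effect, but editing `setup.py` avoids the "unknown url type" failure when the environment variable silently isn't inherited.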
+1
+1
Is there any fork for this?
There is this repo, but I don't know whether it works: https://github.com/PrashantSaikia/Triton-for-Windows
+1
+1
+1
+1
+1
+1
I'm also trying to build llvm-17.0.0-c5dede880d17 compiled for Windows with GitHub Actions here: https://github.com/andreigh/triton-llvm-windows
You don't have any release; will you make one? I would like to install and test it.
If I merge your pull request locally, how can I install it on Windows? What command do I use?
Assume I cloned the repo and merged your pull request; then what?
> * Currently stuck because `'C:\Users\potato\Desktop\llvm-project\build\RelWithDebInfo\bin\mlir-tblgen.exe' is not recognized as an internal or external command, operable program or batch file.` It seems there is some issue with it [not being built](https://github.com/llvm/llvm-project/issues/64150)
This seems to be fixed in b1115f8c? I can build it without problems. Now I can build Triton, but not any backend. There is some GCC-only code that I have no idea how to modify for MSVC.
+1
+1
There are pre-compiled wheels right now, but I don't know how accurately and how well they work. They still work, though.
They do nothing except silence the programs that complain (such as Kohya, etc.). It seems it will take an official release, but I am not sure it can even be done on Windows, as a lot of the *nix stuff can't.
Well, I have to say that this library having no official Windows support in Python is unacceptable.
I agree, with the caveat that it may not be doable on Windows, as I mentioned. Not everything that can be done on *nix can be done on Windows, and that has been the fault of Windows since W95/W98/W2000/WinME/XP/Vista and so on. *nix was made for server-type and ML/AI-type workloads.
Well, it's obviously possible for this library, because I am using a pre-compiled wheel.
You are using a dummy pre-compiled wheel, and I know this because I have tried them all and none gave me the speed-up it gives on Linux. Sure, we can compile it, but that doesn't make it actually work. In other words, a dummy file.
I see; could be.
I really wish this worked on Windows, but I swear I remember reading a thread from the devs last year saying it would never be on Windows.
+1
+1
+1
+1
+1
Even bitsandbytes now officially supports Windows.
If only the Triton devs would just come clean and publicly state "no" (and why not), or "yes" (and when to expect it).
+1
+1
please 🥺
+1 to this 🥺
I have noticed that the README states Linux as the only compatible platform. https://github.com/openai/triton#compatibility
Some people in the past have managed to compile on Windows https://github.com/openai/triton/issues/871 (there is even a really old PR for Windows support https://github.com/openai/triton/pull/24). But still going by the README, I suppose something changed and Triton doesn't support Windows anymore? I haven't tried to compile it myself yet.
I'm interested in the development of this repository but my main OS is Windows. I'm aware that I can probably use WSL2 but still I would prefer to run it on Windows natively. So my question is: is there a plan to officially support Windows? If so, I can help.