pytorch / ao

PyTorch native quantization and sparsity for training and inference
BSD 3-Clause "New" or "Revised" License

is this only for linux? #957

Open · FurkanGozukara opened this issue 1 month ago

FurkanGozukara commented 1 month ago

I installed it on Windows and it's failing on this import:

from torchao.quantization import quantize_

pip freeze

Microsoft Windows [Version 10.0.19045.4894]
(c) Microsoft Corporation. All rights reserved.

R:\CogVideoX_v1\CogVideoX_SECourses\venv\Scripts>activate

(venv) R:\CogVideoX_v1\CogVideoX_SECourses\venv\Scripts>pip freeze
accelerate==0.34.2
aiofiles==23.2.1
annotated-types==0.7.0
anyio==4.6.0
certifi==2024.8.30
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
contourpy==1.3.0
cycler==0.12.1
decorator==4.4.2
diffusers @ git+https://github.com/huggingface/diffusers.git@665c6b47a23bc841ad1440c4fe9cbb1782258656
distro==1.9.0
einops==0.8.0
exceptiongroup==1.2.2
fastapi==0.115.0
ffmpy==0.4.0
filelock==3.16.1
fonttools==4.54.1
fsspec==2024.9.0
gradio==4.44.0
gradio_client==1.3.0
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
huggingface-hub==0.25.1
idna==3.10
imageio==2.35.1
imageio-ffmpeg==0.5.1
importlib_metadata==8.5.0
importlib_resources==6.4.5
Jinja2==3.1.4
jiter==0.5.0
kiwisolver==1.4.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.2
mdurl==0.1.2
moviepy==1.0.3
mpmath==1.3.0
networkx==3.3
numpy==1.26.0
openai==1.48.0
opencv-python==4.10.0.84
orjson==3.10.7
packaging==24.1
pandas==2.2.3
Pillow==9.5.0
proglog==0.1.10
psutil==6.0.0
pydantic==2.9.2
pydantic_core==2.23.4
pydub==0.25.1
Pygments==2.18.0
pyparsing==3.1.4
python-dateutil==2.9.0.post0
python-multipart==0.0.10
pytz==2024.2
PyYAML==6.0.2
regex==2024.9.11
requests==2.32.3
rich==13.8.1
ruff==0.6.8
safetensors==0.4.5
scikit-video==1.1.11
scipy==1.14.1
semantic-version==2.10.0
sentencepiece==0.2.0
shellingham==1.5.4
six==1.16.0
sniffio==1.3.1
spandrel==0.4.0
starlette==0.38.6
sympy==1.13.3
tokenizers==0.20.0
tomlkit==0.12.0
torch==2.4.1+cu124
torchao==0.1
torchvision==0.19.1+cu124
tqdm==4.66.5
transformers==4.45.0
triton @ https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/triton-3.0.0-cp310-cp310-win_amd64.whl
typer==0.12.5
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.30.6
websockets==12.0
xformers==0.0.28.post1
zipp==3.20.2

(venv) R:\CogVideoX_v1\CogVideoX_SECourses\venv\Scripts>
Traceback (most recent call last):
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1764, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Python3108\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\models\t5\modeling_t5.py", line 38, in <module>
    from ...modeling_utils import PreTrainedModel
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\modeling_utils.py", line 58, in <module>
    from .quantizers import AutoHfQuantizer, HfQuantizer
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\quantizers\__init__.py", line 14, in <module>
    from .auto import AutoHfQuantizer, AutoQuantizationConfig
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\quantizers\auto.py", line 42, in <module>
    from .quantizer_torchao import TorchAoHfQuantizer
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\quantizers\quantizer_torchao.py", line 35, in <module>
    from torchao.quantization import quantize_
ImportError: cannot import name 'quantize_' from 'torchao.quantization' (R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\torchao\quantization\__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 830, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Python3108\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\pipelines\cogvideo\pipeline_cogvideox.py", line 21, in <module>
    from transformers import T5EncoderModel, T5Tokenizer
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1755, in __getattr__
    value = getattr(module, name)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1754, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1766, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
cannot import name 'quantize_' from 'torchao.quantization' (R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\torchao\quantization\__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Python3108\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python3108\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\__main__.py", line 39, in <module>
    cli.main()
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 430, in main
    run()
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\app.py", line 14, in <module>
    from diffusers import (
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 821, in __getattr__
    value = getattr(module, name)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 821, in __getattr__
    value = getattr(module, name)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 820, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 832, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.pipelines.cogvideo.pipeline_cogvideox because of the following error (look up to see its traceback):
Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
cannot import name 'quantize_' from 'torchao.quantization' (R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\torchao\quantization\__init__.py)
Press any key to continue . . .
jerryzh168 commented 1 month ago

It might be that your torchao version is too low ("torchao==0.1"); we introduced quantize_ in 0.4.0, I think: https://github.com/pytorch/ao/releases. In the meantime, our packages are only available on Linux and Mac right now, I think.
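
For reference, here is a minimal sketch of the newer quantize_ API (based on the torchao README for the 0.4+ releases; int8_weight_only is just one of the available configs, and exact names may differ between versions):

import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).cuda()
quantize_(model, int8_weight_only())  # swaps Linear weights for int8 tensors in-place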

jcaip commented 1 month ago

Can you try updating torchao? I don't think the top-level quantize_ API is available in 0.1.

But from what I understand, torch.compile() does not work on Windows because Triton lacks Windows support, and we use Triton to codegen our quantization kernels, so I wouldn't expect this to work on Windows.
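
A quick way to confirm which torchao is actually installed (assuming the package exposes __version__, which recent releases do):

import torchao
print(torchao.__version__)                  # quantize_ needs >= 0.4
from torchao.quantization import quantize_  # raises ImportError on 0.1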

FurkanGozukara commented 1 month ago

I am about to test the latest version, @jerryzh168. Thank you, I will report back here.

FurkanGozukara commented 1 month ago

> It might be that your torchao version is too low ("torchao==0.1"); we introduced quantize_ in 0.4.0, I think: https://github.com/pytorch/ao/releases. In the meantime, our packages are only available on Linux and Mac right now, I think.

The latest version pip finds is:

ERROR: Could not find a version that satisfies the requirement torchao==0.4.0 (from versions: 0.0.1, 0.0.3, 0.1)

How can I install the latest version on Windows? Python 3.10, Windows 10.

If you have a wheel link, I can install it directly.

jerryzh168 commented 1 month ago

We don't have a Windows build today, I think. cc @atalman, can you provide a pointer so we can support a Windows build as well?

FurkanGozukara commented 1 month ago

> We don't have a Windows build today, I think. cc @atalman, can you provide a pointer so we can support a Windows build as well?

Awesome, waiting to test. Thank you so much!

gau-nernst commented 1 month ago

You can probably install torchao from source. If you don't need the CUDA extensions, you can do

USE_CPP=0 pip install git+https://github.com/pytorch/ao

But again, since torch.compile() doesn't work on Windows, it's not very useful.

abhi-vandit commented 1 month ago

Is there a way to make the quantizations work on Windows + NVIDIA GPU without torch.compile and the Inductor backend? I am mostly concerned about inference speedups.

Skquark commented 1 month ago

I'm also in need of the wheel for torchao on Windows to get quantization working for Flux, CogVideoX, etc. in my app. I'm fine without compile, but the other features are really needed to optimize VRAM. I tried installing from GitHub and running setup.py install from a clone, but it gave me errors. Hoping we can run something newer than v0.1 soon. Thanks.

FurkanGozukara commented 1 month ago

> I'm also in need of the wheel for torchao on Windows to get quantization working for Flux, CogVideoX, etc. in my app. I'm fine without compile, but the other features are really needed to optimize VRAM. I tried installing from GitHub and running setup.py install from a clone, but it gave me errors. Hoping we can run something newer than v0.1 soon. Thanks.

So true.

By this logic (use Linux, not Windows), why do we even have Python on Windows? PyTorch on Windows? xFormers on Windows? If such stuff is not necessary on Windows?

I don't get the logic of forcing people to use Linux. If we follow this mindset, why do we have all of these on Windows?

gau-nernst commented 1 month ago

If you don't need the CUDA extensions (right now they only back the FPx and sparse marlin kernels, I think), and you don't mind the lack of torch.compile() support, you can install torchao from source on Windows like I mentioned previously:

set USE_CPP=0
pip install git+https://github.com/pytorch/ao

I don't have access to a Windows machine right now, so I just googled how to set an environment variable on Windows. You might need to adjust accordingly.
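
If setting the variable in the shell is fiddly, one hypothetical alternative is to drive pip from Python so USE_CPP is scoped to that single command (a sketch, not an official recipe):

import os, subprocess, sys

env = dict(os.environ, USE_CPP="0")  # "0" skips building the C++/CUDA extensions
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "git+https://github.com/pytorch/ao"],
    env=env,
)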

You are welcome to improve the torchao experience on Windows. In fact, there are past PRs from the community, including from me, that helped build torchao successfully on Windows, including with CUDA extension support.

abhi-vandit commented 1 month ago

> If you don't need the CUDA extensions (right now they only back the FPx and sparse marlin kernels, I think), and you don't mind the lack of torch.compile() support, you can install torchao from source on Windows like I mentioned previously:
>
> set USE_CPP=0
> pip install git+https://github.com/pytorch/ao
>
> I don't have access to a Windows machine right now, so I just googled how to set an environment variable on Windows. You might need to adjust accordingly.
>
> You are welcome to improve the torchao experience on Windows. In fact, there are past PRs from the community, including from me, that helped build torchao successfully on Windows, including with CUDA extension support.

Thanks for the reply. I have a couple of clarifying questions: it seems that previously one was able to build torchao with CUDA extension support on Windows. What changed since then? Also, since torch.compile is not available on Windows, what kind of speedups (if any) on GPU can we expect for normal PyTorch models quantized by torchao?

gau-nernst commented 1 month ago

@abhi-vandit Since there is no Windows CI, there is no guarantee that new CUDA extensions in torchao can be built correctly on Windows. However, most of the errors usually come from Unix-specific features, so the fix is usually simple, e.g. #951, #396. I think torchao welcomes small fixes like these.

I mentioned not building the CUDA extensions previously because it's usually quite involved to set up the C++ and CUDA compilers on Windows. So if you don't need the CUDA extensions, it's not really worth the effort.

> what kind of speedups (if any) on GPU can we expect for normal PyTorch models quantized by torchao

I think most likely you will only see a slowdown. Perhaps you can still get some memory savings.

abhi-vandit commented 1 month ago

@gau-nernst Thanks for the prompt reply. Hope this changes in the near future and we are able to use quantization for inference-time speedups on Windows as well.

FurkanGozukara commented 1 month ago

> If you don't need the CUDA extensions (right now they only back the FPx and sparse marlin kernels, I think), and you don't mind the lack of torch.compile() support, you can install torchao from source on Windows like I mentioned previously:
>
> set USE_CPP=0
> pip install git+https://github.com/pytorch/ao
>
> I don't have access to a Windows machine right now, so I just googled how to set an environment variable on Windows. You might need to adjust accordingly.
>
> You are welcome to improve the torchao experience on Windows. In fact, there are past PRs from the community, including from me, that helped build torchao successfully on Windows, including with CUDA extension support.

This worked:

(venv) C:\Users\Furkan\Videos\a\venv\Scripts>pip freeze
filelock==3.13.1
fsspec==2024.2.0
Jinja2==3.1.3
MarkupSafe==2.1.5
mpmath==1.3.0
networkx==3.2.1
numpy==1.26.3
pillow==10.2.0
sympy==1.12
torch==2.4.1+cu124
torchao==0.6.0+git83d5b63
torchaudio==2.4.1+cu124
torchvision==0.19.1+cu124
typing_extensions==4.9.0
woct0rdho commented 1 month ago

Just want to share that I've successfully installed Triton on Windows and called torch.compile: https://github.com/jakaline-dev/Triton_win/issues/2

Update: I've published Triton wheels in my fork, and torchao.quantization.autoquant just works after installing torchao 0.5.0 from source https://github.com/woct0rdho/triton-windows
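
For anyone following along, the autoquant flow being described looks roughly like this (a sketch based on the torchao README of that era; exact signatures may differ by version):

import torch
import torchao

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).cuda()
model = torchao.autoquant(torch.compile(model, mode="max-autotune"))
model(torch.randn(8, 1024, device="cuda"))  # first call profiles and picks quantized kernels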

msaroufim commented 1 month ago

@woct0rdho did you notice any performance regressions between Windows and Linux? Because if this works, this is very cool; we should consider making a broader announcement on pytorch.org if you're interested.

woct0rdho commented 1 month ago

I have not done serious profiling yet. I don't dual-boot Windows and Linux on the same machine, so I can only test Windows vs. WSL on the same machine, and profiling memory in WSL can be very tricky.

What I am sure of is that autoquant does reduce memory usage for models like SDXL and Flux on Windows. For now I can also run these models without quantization, but I think it can be crucial for users with smaller GPUs.

msaroufim commented 1 month ago

So keep in mind that APIs like quantize_() will make your model smaller but will not necessarily accelerate it, since we rely heavily on subsequently running torch.compile() to get competitive performance.

So one sanity check you can do is to make sure the Triton-generated kernels look reasonable by running TORCH_LOGS="output_code" python script.py
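
Note that the inline VAR=value prefix is Unix shell syntax; on Windows cmd you would run set TORCH_LOGS=output_code first. A programmatic equivalent (assuming the torch._logging.set_logs API available in recent PyTorch) is:

import torch

torch._logging.set_logs(output_code=True)  # dump the Inductor/Triton generated code
compiled = torch.compile(torch.nn.Linear(64, 64).cuda())
compiled(torch.randn(8, 64, device="cuda"))  # kernels are printed on the first call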

woct0rdho commented 1 month ago

Triton is already working. I've tried some simple test scripts, and I've seen users, including myself, get speedups when running large models like Flux and CogVideoX, but not in all cases.

Some reports are here: https://www.reddit.com/r/StableDiffusion/comments/1g45n6n/triton_3_wheels_published_for_windows_and_working/

msaroufim commented 1 month ago

Yeah, my sense is we can be a bit more principled about measuring performance. For example, running this on all of pytorch/benchmark and seeing whether there are serious perf gaps between Windows and Linux, because if the gap is small or gets smaller over time, we could perhaps take a bigger dependency on your Triton fork and recommend people use it.

cc @xuzhao9, who maintains torchbench

FurkanGozukara commented 1 month ago

It would be amazing if we could close the performance gap between Windows and Linux.

woct0rdho commented 1 month ago

Yeah, thank you, I'll try to catch up on this in my spare time.

blap commented 3 weeks ago

> Just want to share that I've successfully installed Triton on Windows and called torch.compile: jakaline-dev/Triton_win#2
>
> Update: I've published Triton wheels in my fork, and torchao.quantization.autoquant just works after installing torchao 0.5.0 from source https://github.com/woct0rdho/triton-windows

I installed Triton on Windows (python.exe -m pip install triton-3.1.0-cp310-cp310-win_amd64.whl), but I cannot install torchao from source because of this error:

"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\bin\HostX86\x64\link.exe" /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\Users\Admin\Desktop\TorchAO\venv\lib\site-packages\torch\lib "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\lib\x64" /LIBPATH:C:\Users\Admin\Desktop\TorchAO\venv\libs "/LIBPATH:C:\Program Files\Python310\libs" "/LIBPATH:C:\Program Files\Python310" /LIBPATH:C:\Users\Admin\Desktop\TorchAO\venv\PCbuild\amd64 "/LIBPATH:C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\ATLMFC\lib\x64" "/LIBPATH:C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22621.0\\um\x64" c10.lib torch.lib torch_cpu.lib torch_python.lib cudart.lib c10_cuda.lib torch_cuda.lib /EXPORT:PyInit__C C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\cuda\fp6_llm\fp6_linear.obj C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\cuda\sparse_marlin\marlin_kernel_nm.obj C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\cuda\tensor_core_tiled_layout\tensor_core_tiled_layout.obj C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\init.obj /OUT:build\lib.win-amd64-cpython-310\torchao\_C.cp310-win_amd64.pyd /IMPLIB:C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\cuda\fp6_llm\_C.cp310-win_amd64.lib

Creating library C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\cuda\fp6_llm\_C.cp310-win_amd64.lib and object C:\Users\Admin\Desktop\TorchAO\ao\build\temp.win-amd64-cpython-310\Release\torchao\csrc\cuda\fp6_llm\_C.cp310-win_amd64.exp
    fp6_linear.obj : error LNK2001: unresolved external symbol "void __cdecl SplitK_Reduction(struct __half *,float *,unsigned __int64,unsigned __int64,int)" (?SplitK_Reduction@@YAXPEAU__half@@PEAM_K2H@Z)
    build\lib.win-amd64-cpython-310\torchao\_C.cp310-win_amd64.pyd : fatal error LNK1120: 1 unresolved externals
    error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.41.34120\\bin\\HostX86\\x64\\link.exe' failed with exit code 1120
    [end of output]

What am I missing?

woct0rdho commented 3 weeks ago

@blap It looks like an error when linking against CUDA.