lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Fresh install - wrong torch install #467

Open Acrivec opened 1 month ago

Acrivec commented 1 month ago

What happened?

It seems that running webui-user.bat installs the wrong torch version on first start. I made a fresh install of Python, Git (...etc; I hadn't done anything with SD on this system yet). I tried installing this fork according to the install instructions, yet it always finished with this error:

  File "P:\StableDiffusion\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "P:\StableDiffusion\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "P:\StableDiffusion\stable-diffusion-webui-directml\modules\launch_utils.py", line 594, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

Yes, I did reinstall and deleted the venv.
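For reference, the check that raises this error is straightforward. The following is a simplified, hypothetical sketch of what `prepare_environment()` in `modules/launch_utils.py` does (the real code probes `torch.cuda.is_available()`; the function and parameter names here are illustrative only, not the actual implementation):

```python
# Illustrative sketch of the startup GPU check, NOT the real prepare_environment().
def check_torch_gpu(cuda_available: bool, commandline_args: list[str]) -> None:
    """Raise unless torch can see a CUDA GPU or the check is explicitly skipped."""
    if "--skip-torch-cuda-test" in commandline_args:
        return  # user opted out of the check (DirectML/CPU setups)
    if not cuda_available:
        raise RuntimeError(
            "Torch is not able to use GPU; add --skip-torch-cuda-test "
            "to COMMANDLINE_ARGS variable to disable this check"
        )

# On an AMD card with a CPU-only torch wheel the CUDA probe reports False,
# so the first launch aborts unless the skip flag is present:
check_torch_gpu(cuda_available=False,
                commandline_args=["--skip-torch-cuda-test", "--use-directml"])
```

This is why the same install succeeds once `--skip-torch-cuda-test` is added: the check is bypassed entirely, independent of which torch build ended up in the venv.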

I found a similar bug report here: https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu/issues/432. As that user stated, I visited https://www.reddit.com/r/StableDiffusion/comments/18vmkon/comment/khpbnvo/ and followed the instructions there, adding --skip-torch-cuda-test --use-directml --reinstall-torch.

It then downloaded different versions, or packages that were not on the list altogether; reinstalling normally shows 'using cached version' while downloading/installing, but when I ran webui-user.bat with the above arguments, it started downloading a lot of new files. I've saved the logs for those:

```bat
P:\StableDiffusion\stable-diffusion-webui-directml>webui-user.bat
venv "P:\StableDiffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3-amd-13-g517aaaff
Commit hash: 517aaaff2bb1a512057d88b0284193b9f23c0b47
Installing torch and torchvision
Collecting torch==2.0.0
  Downloading torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 172.3/172.3 MB 24.2 MB/s eta 0:00:00
Collecting torchvision==0.15.1
  Downloading torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 79.0 MB/s eta 0:00:00
Collecting torch-directml
  Downloading torch_directml-0.2.1.dev240521-cp310-cp310-win_amd64.whl (8.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.8/8.8 MB 56.3 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torch==2.0.0) (4.11.0)
Requirement already satisfied: filelock in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torch==2.0.0) (3.14.0)
Requirement already satisfied: jinja2 in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torch==2.0.0) (3.1.4)
Requirement already satisfied: networkx in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torch==2.0.0) (3.3)
Requirement already satisfied: sympy in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torch==2.0.0) (1.12)
Collecting requests
  Downloading requests-2.32.2-py3-none-any.whl (63 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.9/63.9 kB ? eta 0:00:00
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torchvision==0.15.1) (10.3.0)
Requirement already satisfied: numpy in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from torchvision==0.15.1) (1.26.4)
Collecting torch-directml
  Downloading torch_directml-0.2.0.dev230426-cp310-cp310-win_amd64.whl (8.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.2/8.2 MB 52.2 MB/s eta 0:00:00
Requirement already satisfied: MarkupSafe>=2.0 in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from jinja2->torch==2.0.0) (2.1.5)
Collecting idna<4,>=2.5
  Downloading idna-3.7-py3-none-any.whl (66 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.8/66.8 kB ? eta 0:00:00
Collecting certifi>=2017.4.17
  Downloading certifi-2024.2.2-py3-none-any.whl (163 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 163.8/163.8 kB ? eta 0:00:00
Collecting urllib3<3,>=1.21.1
  Downloading urllib3-2.2.1-py3-none-any.whl (121 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.1/121.1 kB ? eta 0:00:00
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.3/100.3 kB ? eta 0:00:00
Requirement already satisfied: mpmath>=0.19 in p:\stablediffusion\stable-diffusion-webui-directml\venv\lib\site-packages (from sympy->torch==2.0.0) (1.3.0)
Installing collected packages: urllib3, idna, charset-normalizer, certifi, torch, requests, torchvision, torch-directml
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0+cu121
    Uninstalling torch-2.3.0+cu121:
      Successfully uninstalled torch-2.3.0+cu121
  Attempting uninstall: torchvision
    Found existing installation: torchvision 0.18.0+cu121
    Uninstalling torchvision-0.18.0+cu121:
      Successfully uninstalled torchvision-0.18.0+cu121
Successfully installed certifi-2024.2.2 charset-normalizer-3.3.2 idna-3.7 requests-2.32.2 torch-2.0.0 torch-directml-0.2.0.dev230426 torchvision-0.15.1 urllib3-2.2.1

[notice] A new release of pip available: 22.2.1 -> 24.0
[notice] To update, run: P:\StableDiffusion\stable-diffusion-webui-directml\venv\Scripts\python.exe -m pip install --upgrade pip
Installing clip
Installing open_clip
Cloning assets into P:\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-webui-assets...
Cloning into 'P:\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 2.21 MiB/s, done.
Cloning Stable Diffusion into P:\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai...
Cloning into 'P:\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (306/306), done.
remote: Total 580 (delta 278), reused 446 (delta 247), pack-reused 9
Receiving objects: 100% (580/580), 73.44 MiB | 23.75 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into P:\StableDiffusion\stable-diffusion-webui-directml\repositories\generative-models...
Cloning into 'P:\StableDiffusion\stable-diffusion-webui-directml\repositories\generative-models'...
remote: Enumerating objects: 941, done.
remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
Receiving objects: 100% (941/941), 43.85 MiB | 21.95 MiB/s, done.
Resolving deltas: 100% (490/490), done.
Cloning K-diffusion into P:\StableDiffusion\stable-diffusion-webui-directml\repositories\k-diffusion...
Cloning into 'P:\StableDiffusion\stable-diffusion-webui-directml\repositories\k-diffusion'...
remote: Enumerating objects: 1345, done.
remote: Counting objects: 100% (1345/1345), done.
remote: Compressing objects: 100% (434/434), done.
remote: Total 1345 (delta 944), reused 1264 (delta 904), pack-reused 0
Receiving objects: 100% (1345/1345), 239.04 KiB | 2.84 MiB/s, done.
Resolving deltas: 100% (944/944), done.
Cloning BLIP into P:\StableDiffusion\stable-diffusion-webui-directml\repositories\BLIP...
Cloning into 'P:\StableDiffusion\stable-diffusion-webui-directml\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 19.68 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Installing onnxruntime-directml
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
P:\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --skip-torch-cuda-test --use-directml --reinstall-torch
ONNX: version=1.18.0 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
[...] proceeds to start correctly
```

After this, it downloaded the models, launched, and opened the web UI.
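The version banner near the end of the log comes from a simple comparison between the installed and tested torch versions. Here is a standalone sketch of that kind of check (names hypothetical, stdlib only, not the webui's actual code); the local build tag such as `+cpu` or `+cu121` is stripped before comparing:

```python
from typing import Optional

def version_warning(installed: str, tested: str) -> Optional[str]:
    """Return a warning string if the installed torch differs from the tested one."""
    # Strip local build tags like "+cpu" or "+cu121" before comparing.
    base = installed.split("+", 1)[0]
    if base == tested:
        return None
    return (f"You are running torch {installed}.\n"
            f"The program is tested to work with torch {tested}.")

print(version_warning("2.0.0+cpu", "2.1.2"))
```

This explains why the warning is informational only: `2.0.0+cpu` simply is not the tested `2.1.2`, but startup continues regardless.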

Steps to reproduce the problem

Everything stated above; simply do a fresh install.

What should have happened?

A fresh install should use the correct torch version.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-05-22-18-03.json

Console logs

I've added them above, in the collapsed block in the first step.

Additional information

No response

CS1o commented 1 month ago

Nothing wrong here.

This repo is a fork of Automatic1111, which means that by default it will always install the latest compatible torch version that works for Nvidia GPUs. As stated in the readme (maybe not clearly enough), there are launch arguments that are needed on the first run to use this webui.

So you always need to launch the webui with one of these: --use-directml (installs the latest torch version for AMD DirectML/ONNX/CPU) or --use-zluda (downloads the CUDA torch build made for Nvidia, which ZLUDA needs).

For anyone who launched webui-user.bat without editing it first, like above, here is an easy fix: open a CMD window and run pip cache purge. That removes the already-cached CUDA torch files, so you can start again with the correct launch arguments.
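Assuming the venv's pip is the one on your PATH and you are inside the webui folder, that cleanup could look like this (the venv deletion is an optional extra step, not part of the fix above, but it forces a clean rebuild on the next launch):

```bat
rem Drop the cached CUDA torch wheels so the next launch downloads the right build
pip cache purge

rem Optional: delete the stale venv so it is recreated from scratch on next launch
rmdir /s /q venv
```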

cavalia88 commented 3 weeks ago

I'm getting a similar error message as well. I followed the guide from this webpage: https://www.stablediffusiontutorials.com/2024/01/run-stable-diffusion-on-amd-gpu.html, but encountered a similar error.

I'd appreciate any advice on how to overcome this.

```bat
(sd_olive) C:\Users\Admin\sd-test\stable-diffusion-webui-directml>webui-user.bat --use-directml
venv "C:\Users\Admin\sd-test\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: v1.9.3-amd-25-g73a4e8c0
Commit hash: 73a4e8c03c897e1af83909d7218fc6a092189eec
Traceback (most recent call last):
  File "C:\Users\Admin\sd-test\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\Users\Admin\sd-test\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\Users\Admin\sd-test\stable-diffusion-webui-directml\modules\launch_utils.py", line 589, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
```

cavalia88 commented 3 weeks ago

After doing further searches, I found the solution here: https://www.reddit.com/r/StableDiffusion/comments/18vmkon/comment/khpbnvo/

Edit webui-user.bat and add these command line arguments: "set COMMANDLINE_ARGS= --skip-torch-cuda-test --use-directml --reinstall-torch".

After running webui-user.bat with those arguments for the very first time, you can subsequently remove the --reinstall-torch argument.

So afterwards, the line in your webui-user.bat file should be "set COMMANDLINE_ARGS= --skip-torch-cuda-test --use-directml".
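Putting both states together, the relevant line of webui-user.bat would look like this (only one set line should be active at a time; the first is for the initial run, the second for every run after it):

```bat
rem First launch only: skip the CUDA check and force the DirectML torch build
set COMMANDLINE_ARGS=--skip-torch-cuda-test --use-directml --reinstall-torch

rem Every later launch: same flags, without --reinstall-torch
set COMMANDLINE_ARGS=--skip-torch-cuda-test --use-directml
```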

CS1o commented 3 weeks ago

I wouldn't recommend the Olive/ONNX version to anyone. Better to use ZLUDA.

For anyone who wants to install Stable-Diffusion-Webui-amdgpu with DirectML or ZLUDA, or any other webui, please follow my install guides here:

https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides