lllyasviel / stable-diffusion-webui-forge

"Torch is not able to use GPU" #369

Open 303Aki303 opened 8 months ago

303Aki303 commented 8 months ago

What happened?

I downloaded and installed everything, but it keeps telling me "Torch is not able to use GPU". I'm on Windows with an AMD GPU, so I tried --directml and then --use-directml, but still no luck. How can I fix this?
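For reference, here's roughly what my webui-user.bat looks like right now - only COMMANDLINE_ARGS was changed from the default template:

@echo off
rem only COMMANDLINE_ARGS was changed from the default template
set PYTHON=
set GIT=
set VENV_DIR=
rem tried --directml first, then swapped it for --use-directml
set COMMANDLINE_ARGS=--use-directml

call webui.bat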

Steps to reproduce the problem

open the webui-user.bat

What should have happened?

it should've opened

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome, Brave, Microsoft Edge

Sysinfo

can't open the webui

Console logs

venv "D:\Forge\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Version: f0.0.15v1.8.0rc-latest-213-geacb14e1
Commit hash: eacb14e1157084c4bae01a6dc65a01f849408b2b
Traceback (most recent call last):
  File "D:\Forge\stable-diffusion-webui-forge\launch.py", line 51, in <module>
    main()
  File "D:\Forge\stable-diffusion-webui-forge\launch.py", line 39, in main
    prepare_environment()
  File "D:\Forge\stable-diffusion-webui-forge\modules\launch_utils.py", line 431, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

Additional information

thanks in advance for any help and tips

usernamele31 commented 8 months ago

I'm going through the AMD on Windows experience right now too :(

try adding "--skip-torch-cuda-test" to the COMMANDLINE_ARGS line you added --directml to?
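so the whole line in webui-user.bat would end up looking something like this (swap --use-directml for --directml if that's the variant your build accepts):

set COMMANDLINE_ARGS=--use-directml --skip-torch-cuda-test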

from there, you can try to follow the instructions scattered around this thread, because you'll probably still be getting errors: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/58 - it's quite a bit of work though, so here's the tl;dr that worked for me:

comment out all the @torch.inference_mode() decorators (add # before them) in:
\ldm_patched\modules\utils.py - line 407
\modules_forge\forge_loader.py - line 236, line 242

change "with torch.inference_mode():" to "with torch.no_grad():" in:
\modules\processing.py - line 817

then change the random number generation source from GPU to CPU in the webui settings, and try restarting your computer - that should get things working for you, hopefully!
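In case those edits are hard to picture, here's a tiny self-contained sketch of the before/after pattern - the function names are just placeholders, not the actual Forge code. The idea is that no_grad still skips gradient tracking, but unlike inference_mode it doesn't produce "inference tensors" that error out when something tries to modify them later:

import torch

# Same kind of edit as in ldm_patched\modules\utils.py and modules_forge\forge_loader.py:
# the decorator is simply commented out.
# @torch.inference_mode()
def load_something():
    return torch.ones(2)

# Same kind of edit as in modules\processing.py: the context manager is swapped.
def sample(x):
    with torch.no_grad():  # was: with torch.inference_mode():
        return x * 2

if __name__ == "__main__":
    out = sample(load_something())
    out += 1  # an in-place update like this would raise on an inference-mode tensor
    print(out)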

mlsterpr0 commented 8 months ago

comment out all the @torch.inference_mode() decorators (add # before them) in:
\ldm_patched\modules\utils.py - line 407
\modules_forge\forge_loader.py - line 236, line 242
change "with torch.inference_mode():" to "with torch.no_grad():" in: \modules\processing.py - line 817

then change the random number generation source from GPU to CPU in the webui settings, and try restarting your computer - that should get things working for you, hopefully!

ok, now it kinda works. But there is a new problem: it eats ALL the VRAM (and freezes because of that) even when generating a small 512x512 image. Something is not right. After so much time there's still no proper tutorial on how to use Forge on AMD? Is that right?

krasteriii commented 7 months ago

I also get this error, and I'm on an Nvidia GPU.