AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: RuntimeError: Torch is not able to use GPU - RTX 2070 Windows 11 #11405

Open CCesternino opened 1 year ago

CCesternino commented 1 year ago

Is there an existing issue for this?

What happened?

Torch has stopped recognizing my GPU, so I can no longer run Automatic1111, and I do not understand why.

I have uninstalled and reinstalled Automatic1111, Git, Python, and the CUDA toolkit, deleted the venv folder, and updated pip, but the issue persists. I have also updated the graphics drivers and restarted the PC multiple times.

Steps to reproduce the problem

  1. Open webui-user.bat
  2. The process starts loading, then prints an error report
  3. The process closes

What should have happened?

Torch should have been able to use the GPU

Commit where the problem happens

baf6946e06249c5af9851c60171692c44ef633e0

What Python version are you running on?

Python 3.10.x

What platforms do you use to access the UI?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--xformers

List of extensions

None

Console logs

venv "M:\AI\A1111\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Traceback (most recent call last):
  File "M:\AI\A1111\stable-diffusion-webui\launch.py", line 38, in <module>
    main()
  File "M:\AI\A1111\stable-diffusion-webui\launch.py", line 29, in main
    prepare_environment()
  File "M:\AI\A1111\stable-diffusion-webui\modules\launch_utils.py", line 257, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

Additional information

nvidia-smi: NVIDIA-SMI 536.23, Driver Version: 536.23, CUDA Version: 12.2, NVIDIA GeForce RTX 2070

nvcc:

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2022 NVIDIA Corporation
    Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
    Cuda compilation tools, release 11.8, V11.8.89
    Build cuda_11.8.r11.8/compiler.31833905_0

print(torch.version.cuda): 11.8

torch.cuda.is_available(): False

torch.zeros(1).cuda():

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "M:\AI\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 247, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: No CUDA GPUs are available
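
For reference, these individual checks can be collected into one script and run with the venv's own interpreter (venv\Scripts\python.exe) so that it probes the same torch install the webui uses; a minimal diagnostic sketch using only standard torch APIs:

    # Plain-torch diagnostic; nothing webui-specific is assumed.
    import torch

    print("torch version:", torch.__version__)
    print("built for CUDA:", torch.version.cuda)        # 11.8 in this report
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))  # expect the RTX 2070
    else:
        print("torch cannot see any CUDA device")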

Algordinho commented 1 year ago

Same problem here: Torch can't access the GPU, and then I get RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' in SD.
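
For context, on torch builds of that era fp16 layer norm was not implemented on the CPU, so this second error is the usual symptom of torch silently falling back to CPU once the GPU is lost; a minimal reproduction sketch:

    # Reproduces the 'Half' error on CPU (torch 2.0-era builds), showing
    # why it appears as soon as the GPU stops being used.
    import torch

    x = torch.randn(2, 4, dtype=torch.float16)  # half-precision tensor on CPU
    try:
        torch.nn.functional.layer_norm(x, (4,))
    except RuntimeError as err:
        print(err)  # "LayerNormKernelImpl" not implemented for 'Half'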

missionfloyd commented 1 year ago

> and the CUDA files

Do you mean replacing the DLLs in the torch folder? You don't need to do that anymore.

CCesternino commented 1 year ago

No, I mean just reinstalling torch, based on the install requirements of Automatic1111.
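
For completeness, a clean reinstall of the CUDA 11.8 wheels into the venv looks roughly like the sketch below; the pinned versions are an assumption based on webui v1.3.x-era defaults, not something stated in this thread (launch_utils.py and the TORCH_COMMAND variable hold the authoritative pins):

    # Hedged sketch: force-reinstall torch's cu118 wheels into whichever
    # environment runs this script. Version pins are assumed, not confirmed.
    import subprocess
    import sys

    subprocess.check_call([
        sys.executable, "-m", "pip", "install", "--force-reinstall",
        "torch==2.0.1", "torchvision==0.15.2",
        "--index-url", "https://download.pytorch.org/whl/cu118",
    ])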

M4X1K02 commented 1 year ago

Apparently, this function in launch_utils.py always returns False:

def check_run_python(code: str) -> bool:
    # `python` is the module-level variable in launch_utils.py holding the
    # path to the venv's interpreter (derived from sys.executable)
    result = subprocess.run([python, "-c", code], capture_output=True, shell=False)
    return result.returncode == 0

The reason is that somehow `python` is not recognized, but putting it into a string literal solved the issue for me:

def check_run_python(code: str) -> bool:
    # "python" as a string runs whatever interpreter is first on PATH,
    # not necessarily the venv's interpreter that `python` pointed to
    result = subprocess.run(["python", "-c", code], capture_output=True, shell=False)
    return result.returncode == 0
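
The probe string below mirrors the CUDA test that prepare_environment feeds to check_run_python; the rest of this standalone sketch is illustrative, and is handy for seeing whether the venv interpreter and whatever "python" resolves to on PATH actually disagree:

    # Run the CUDA probe under both interpreters and compare the results.
    import subprocess
    import sys

    PROBE = "import torch; assert torch.cuda.is_available()"

    for label, interpreter in [
        ("sys.executable (this venv)", sys.executable),
        ("'python' on PATH", "python"),
    ]:
        result = subprocess.run([interpreter, "-c", PROBE],
                                capture_output=True, text=True, shell=False)
        print(f"{label}: {'OK' if result.returncode == 0 else 'FAILED'}")
        if result.returncode != 0 and result.stderr.strip():
            print(result.stderr.strip().splitlines()[-1])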