lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0
1.68k stars 175 forks source link

[Bug]: 7800 xt ( RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check ) #351

Closed Dervlex closed 5 months ago

Dervlex commented 6 months ago

Checklist

What happened?

Error while starting Automatic1111. I had it running on my RX 6800, but on my 7800 XT it doesn't want to work. I tried everything.

It works on Linux, but I want to use it on Windows... Any ideas?

Steps to reproduce the problem

  1. Reinstall
  2. Different pytorch versions
  3. Clean Driver Install

What should have happened?

Automatic1111 should start.

What browsers do you use to access the UI?

No response

Sysinfo

Not possible.

Console logs

File "Y:\stable deiff\stable-diffusion-webui\launch.py", line 48, in <module>
    main()
  File "Y:\stable deiff\stable-diffusion-webui\launch.py", line 39, in main
    prepare_environment()
  File "Y:\stable deiff\stable-diffusion-webui\modules\launch_utils.py", line 384, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

Additional information

No response

lshqqytiger commented 6 months ago

#335 Add --use-directml.

Dervlex commented 6 months ago

#335 Add --use-directml.

I tried this. Same error. I also tried adding "torch-directml" to requirements.txt, but I'm not sure it was right.

I added it once right after the normal "torch" entry, deleted the venv folder, and ran webui-user.bat.

That didn't work.

After that I changed "torch" back to normal and added "torch-directml" at the bottom. That didn't work either.

Same issue every time.

Any ideas?

computerex commented 6 months ago

Same issue on RX6600, Windows 11.

venv "C:\projects\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: cfa6e40e6d7e290b52940253bf705f282477b890
Traceback (most recent call last):
  File "C:\projects\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\projects\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\projects\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

disbox19 commented 6 months ago

Add torch-directml in requirements_versions.txt, then:

.\venv\scripts\activate
pip install -r requirements.txt

If you get an error again: pip install httpx==0.24.1
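Typed out in full, those steps look like this from a Command Prompt opened in the webui folder (a sketch only; it assumes the default venv layout that webui-user.bat creates):

```bat
rem 1. Open requirements_versions.txt and add a line containing: torch-directml
rem 2. From the stable-diffusion-webui-directml folder, activate the venv:
.\venv\Scripts\activate
rem 3. Reinstall the requirements inside the venv:
pip install -r requirements.txt
rem 4. Only if an httpx-related error appears afterwards:
pip install httpx==0.24.1
```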

Drael64 commented 6 months ago

#335 Add --use-directml.

I have the same problem. It was working until I updated. Now it gives this error, and if I put in '--use-directml' it returns AttributeError: module 'torch' has no attribute 'dml'

I did a fresh install because updating broke my old copy. Didn't seem to help either. Installing it originally was easy. Strange.

Drael64 commented 6 months ago

#335 Add --use-directml.

I have the same problem. It was working until I updated. Now it gives this error, and if I put in '--use-directml' it returns AttributeError: module 'torch' has no attribute 'dml'

I did a fresh install because updating broke my old copy.

add torch-direct in requirements_versions.txt then .venv\scripts\activate pip install -r requirements.txt if error again pip install httpx==0.24.1

I think you are missing an "ml" at the end. I did these recommended steps, added --reinstall torch (sp?) or similar at the end of the arguments, and with a fresh install that seems to have worked. Basically it was a combination of the recommended fixes. If anyone else is experiencing this issue, they should look around the bug reports for the few recent entries like this one and look at everyone's suggestions. You may be like me and need to apply a couple of them to get the GPU recognized again.
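For reference, the flag being half-remembered here is most likely --reinstall-torch (I'm not certain that's the exact spelling in every webui version, so treat it as an assumption). Combined with the DirectML flag, a webui-user.bat along these lines forces torch to be reinstalled on the next launch:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --reinstall-torch (assumed spelling) makes the launcher reinstall torch once
set COMMANDLINE_ARGS=--use-directml --reinstall-torch

call webui.bat
```

After one successful start, remove --reinstall-torch again so it doesn't reinstall torch on every launch.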

snakeedger commented 6 months ago

#335 Add --use-directml.

I have the same problem. It was working until I updated. Now it gives this error, and if I put in '--use-directml' it returns AttributeError: module 'torch' has no attribute 'dml' I did a fresh install because updating broke my old copy.

add torch-direct in requirements_versions.txt then .venv\scripts\activate pip install -r requirements.txt if error again pip install httpx==0.24.1

I think you are missing an "ml" at the end. I did these recommended steps, added --reinstall torch (sp?) or similar at the end of the arguments, and with a fresh install that seems to have worked. Basically it was a combination of the recommended fixes. If anyone else is experiencing this issue, they should look around the bug reports for the few recent entries like this one and look at everyone's suggestions. You may be like me and need to apply a couple of them to get the GPU recognized again.

Made an account just to thank you both; all of this got it working again on my end.

dykoka commented 6 months ago

torch-directml

Did you find a solution? I have the exact same problem as you on the same AMD card.

Dervlex commented 6 months ago

torch-directml

Did you find a solution? I have the exact same problem as you on the same AMD card.

I will answer in the coming days. My friend hasn't had time to try it yet. Maybe tomorrow. If I get it to work then, I will write back.

Xinterp6196 commented 6 months ago

Add torch-directml in requirements_versions.txt, then:

.\venv\scripts\activate
pip install -r requirements.txt

If you get an error again: pip install httpx==0.24.1

You sir are a saint. Thank you VERY much for this fix!

Dervlex commented 6 months ago

Actually I tried everything.... (screenshots attached)

tried everything.

also the start argument: --use-directml

Vexillen commented 6 months ago

tried everything.

also the start argument: --use-directml

Hey, I was going to open up the exact same issue. I managed to make it work by following the recommendations here, so I just wanted to tell you what I did exactly. This topic helped me, so I hope this helps you.

First of all, I don't know if it's related, but I noticed that in the pictures you posted your folder is named "stable-diffusion-webui", while my own Stable Diffusion folder is named "stable-diffusion-webui-directml". I think it should be like mine, since we use DirectML, right?

Are you sure you downloaded the right version? Maybe you followed an outdated YouTube tutorial or something?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs I did this one.

1- For the first part I did the same: added torch-directml in requirements_versions.txt.

2- I couldn't manage to do the second part. I didn't know how to run the .\venv\scripts\activate and pip install -r requirements.txt commands (I'm new to this stuff).
a-) So instead I just opened up the main folder. (D:\stable-diffusion-webui-directml)
b-) Clicked on the address bar at the top. Typed cmd.
c-) Command Prompt opened, then I just did "pip install -r requirements.txt". It installed some stuff, but after that it was still giving the error.

3- Then I edited webui-user.bat and added the command line argument --use-directml. Ran it. Instead of the error, this time it started to install some new stuff and reinstalled torch at the end.

It worked after that. I didn't do a clean reinstall or anything like that.
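For anyone following along, step 3 corresponds to a webui-user.bat roughly like this (a minimal sketch; any other arguments you already use would go on the same COMMANDLINE_ARGS line):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml

call webui.bat
```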

Karg4 commented 6 months ago

tried everything. also the start argument: --use-directml

Hey, I was going to open up the exact same issue. I managed to make it work by following the recommendations here, so I just wanted to tell you what I did exactly. This topic helped me, so I hope this helps you.

First of all, I don't know if it's related, but I noticed that in the pictures you posted your folder is named "stable-diffusion-webui", while my own Stable Diffusion folder is named "stable-diffusion-webui-directml". I think it should be like mine, since we use DirectML, right?

Are you sure you downloaded the right version? Maybe you followed an outdated YouTube tutorial or something?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs I did this one.

1- For the first part I did the same: added torch-directml in requirements_versions.txt.

2- I couldn't manage to do the second part. I didn't know how to run the .\venv\scripts\activate and pip install -r requirements.txt commands (I'm new to this stuff).
a-) So instead I just opened up the main folder. (D:\stable-diffusion-webui-directml)
b-) Clicked on the address bar at the top. Typed cmd.
c-) Command Prompt opened, then I just did "pip install -r requirements.txt". It installed some stuff, but after that it was still giving the error.

3- Then I edited webui-user.bat and added the command line argument --use-directml. Ran it. Instead of the error, this time it started to install some new stuff and reinstalled torch at the end.

It worked after that. I didn't do a clean reinstall or anything like that.

I had to dig up my never-used GitHub password just to thank you. I have followed these steps and it all works well now. The only issue I had was a new error while trying to use img2img (not enough GPU memory available), and I fixed it by using extra command line arguments. This is how my webui-user.bat file looks now; it works for an AMD system using a 6650 XT:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml --medvram --no-half --precision full --no-half-vae --opt-sub-quad-attention --opt-split-attention-v1 --autolaunch

call webui.bat

ArchAngelAries commented 6 months ago

Followed all the fixes here and realized something changed in the way the DirectML argument is implemented: it used to be "--backend=directml", but now the working command line arg for DirectML is "--use-directml". Took me a hot second because I was telling myself I already had the command arg set, but upon comparing word for word it had indeed changed.

srt-jay commented 6 months ago

Add torch-directml in requirements_versions.txt, then:

.\venv\scripts\activate
pip install -r requirements.txt

If you get an error again: pip install httpx==0.24.1

THANK YOU!

Gutwql commented 5 months ago

According to https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/335, add --use-directml to the settings.

set COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check --use-directml

For Windows users with an AMD 7900X CPU and an AMD RX 6700 GPU, webui-user.bat should run perfectly.

ZacBouh commented 4 months ago

Followed the instructions but now get this error: (screenshot attached)

I checked and the file is there, so it seems the error comes from the OS not loading it correctly. Followed every step. Tried a fresh install, but I still get the same error.

lshqqytiger commented 4 months ago

Please leave a comment on the update note or open a new discussion about ZLUDA. This issue is about DirectML and is closed.

riccorohl commented 3 months ago

Keep in mind that "--use-directml" has to go right after COMMANDLINE_ARGS=, as shown below. I was having a difficult time, but I ended up piecing it together. I guess it makes sense, but someone might miss it.

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml

call webui.bat