lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: New install - RuntimeError: Torch is not able to use GPU #340

Closed: enleo0214 closed this issue 4 months ago

enleo0214 commented 6 months ago

What happened?

[screenshot of the RuntimeError attached]

Steps to reproduce the problem

Followed the AMD tutorial at https://community.amd.com/t5/ai/updated-how-to-running-optimized-automatic1111-stable-diffusion/ba-p/630252

What should have happened?

(not provided)

What browsers do you use to access the UI?

Google Chrome

Sysinfo

(not provided)

Console logs

(not provided)

Additional information

-Deleted venv and tmp
-Completely deleted Stable Diffusion WebUI and re-cloned
-Reinstalled Python 3.10.6
-Tried --use-directml and --skip-torch-cuda-test

None of them worked :(

Synergyart commented 6 months ago

Hey there! The same thing happened to me, this is what fixed the problem:

-Clone SD again in a different folder.
-Open 'requirements_versions.txt' and add the following: torch-directml.
-In webui-user, add '--use-directml' to set COMMANDLINE_ARGS=
-Run webui-user.

It should continue with the install without problems.
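
For reference, a minimal webui-user.bat matching these steps might look like the sketch below (the empty PYTHON/GIT/VENV_DIR lines are the defaults from the stock file; only the COMMANDLINE_ARGS line is the change this fix describes):

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml
call webui.bat

And requirements_versions.txt gets one extra line at the bottom: torch-directml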

Foxtrot-Uniform42 commented 6 months ago

> (quoting Synergyart's fix above)

fixed issue for me too

cesarcwb98 commented 6 months ago

> (quoting Synergyart's fix above)

fixed for me too

lshqqytiger commented 6 months ago

https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions/334#discussioncomment-7936839

sensitive-squid commented 6 months ago

> (quoting Synergyart's fix above)

This didn't work for me. When I did that I got an error; do you see something I'm doing wrong? [screenshot of the error attached]

lshqqytiger commented 6 months ago

Python 3.11 is not supported. Downgrade to Python <= 3.10.
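
If you have multiple Pythons installed, webui-user.bat can also be pointed at a 3.10 interpreter explicitly via its PYTHON variable (the path below is only an example; adjust it to your install):

set PYTHON=C:\Users\you\AppData\Local\Programs\Python\Python310\python.exe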

micbusin commented 6 months ago

Running webui-user.bat one time with set COMMANDLINE_ARGS=--use-directml --update-check --update-all-extensions --reinstall-xformers --reinstall-torch solved it. After this you can run it with set COMMANDLINE_ARGS=--use-directml, or you can keep the update options. Without anything like --skip-torch-cuda-test, --no-half, or --medvram, an RX 6600 gets 2.2 it/s on SD 1.5 models. You can list the args with python launch.py --help.
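
As a sketch, the two stages described above would look like this in webui-user.bat (run the first once to force the reinstall, then switch to the second):

rem One-time run to reinstall torch for DirectML:
set COMMANDLINE_ARGS=--use-directml --update-check --update-all-extensions --reinstall-xformers --reinstall-torch

rem Steady-state line afterwards:
set COMMANDLINE_ARGS=--use-directml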

MonsterAlex commented 6 months ago

> (quoting Synergyart's fix above)

fixed for me too

dlasher commented 5 months ago

AMD support appears to be badly broken under both Linux and Windows at the moment: fresh OS install, fresh Git install.

RuntimeError: Torch is not able to use GPU

Hoping this gets addressed.

rvxfahim commented 4 months ago

> (quoting micbusin's --reinstall-torch fix above)

Did not fix it for me.

micbusin commented 4 months ago

You can try pip uninstall torch torchvision, then pip install torch-directml. I had enough of Windows and webui; I moved to Fedora 39 + ROCm + ComfyUI. Now I can run 1024x1024 SDXL generation with an 8 GB Radeon RX 6600, so A1111/sd-webui is not an option for me anymore. If you can't fix it, I suggest you do the same. Fedora 40 will have ROCm 6 out of the box. Ubuntu froze many times because of OOM; Fedora also unloads models to RAM and can reach 100% RAM usage, but instead of freezing it just slows down for a while. This happens only when I'm using face-swapping addons with Comfy; normal generation always runs smoothly.
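
For anyone trying the pip route, the commands as written (run from the webui folder with the venv activated; this assumes torch lives in the default venv):

venv\Scripts\activate
pip uninstall torch torchvision
pip install torch-directml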

TigermanUK commented 4 months ago

I had my RX 580 on Win10. After updating from 1.4.0 to 1.7.0, only the CPU was being used, which is about 15x slower than the previously working GPU. The fixes listed here, along with another comment above, worked for me. Thanks guys... 1) As Synergyart commented above:

> (Synergyart's fix quoted above)

I also had to add '--reinstall-torch' to the set COMMANDLINE_ARGS= line in webui-user.bat.

After the torch requirement was updated I removed '--reinstall-torch', otherwise you get the "requirement already met" warning on every launch, which you don't need to keep seeing.

Finally, I now set COMMANDLINE_ARGS= to the following in webui-user.bat:

set COMMANDLINE_ARGS=--medvram --no-half --disable-nan-check --precision full --use-directml --opt-split-attention

Although, from what I'm reading, some of these command-line options may be set in Settings now or in the future, it's working for me. The plus side is iterations per second went from 1.0 to 1.7 it/s. The downside is the usable VRAM is less and less, so the dimensions you can use keep shrinking. Prior to 1.4 I could set the width and height to 512x512 and it would only run out of VRAM with too many LoRAs. After v1.4, if I set width x height above 464x512 it can't allocate enough memory. Now with v1.7.0 it looks like I can barely go over 400x480 without an out-of-VRAM allocation error. So I can't load up old projects that required a bigger starting resolution, or, as you guys know, a different width or height will change the output.

Just a small note for others who may have similar experiences: code optimization in the last 8 months looks to be great, but at the cost of VRAM. If anyone wants to suggest settings I should look at, they are very welcome. Cheers.

dylanh724 commented 2 months ago

Followed these to the "T" on a Zephyrus G14:

RuntimeError: Torch is not able to use GPU

No dice. Does not work on AMD GPU. (CC @lshqqytiger)

EDIT: To clarify, should I add torch-directml or should I replace torch? I simply added it to the bottom, as instructed.

lshqqytiger commented 2 months ago

Remove the venv folder and run these commands:

git stash
git stash clear
git pull
git checkout tags/v1.9.3-amd

Don't modify requirements.txt. Just launch webui with --use-directml.
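
Putting that together as one sequence from the stable-diffusion-webui-amdgpu folder (a sketch; the rmdir path assumes the default venv location):

rmdir /s /q venv
git stash
git stash clear
git pull
git checkout tags/v1.9.3-amd
rem then set COMMANDLINE_ARGS=--use-directml in webui-user.bat and run it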