Ael07 opened 1 month ago
Hey, I tested the webui in CPU mode first by using:
--use-directml --use-cpu all --no-half --opt-sub-quad-attention
(CPU mode takes priority here, but --use-directml is still needed to get the right torch version.)
Then I switched to DirectML by removing --use-cpu all, leaving only:
--use-directml --no-half --opt-sub-quad-attention
Both work normally. Also:
---precision full isn't needed for DirectML (it can be needed for CPU mode).
---disable-nan-check won't fix anything and only hides errors (not recommended).
---disable-safe-unpickle is unsafe; better to convert your old .ckpt models to .safetensors with the Checkpoint Merger tab in the webui.
You should check whether it works for you with the DirectML args above. And please test with a .safetensors model around 2 GB in size, like the Dreamshaper v8 model.
Bro, your --use-cpu all brought another unfixable error; I had to reinstall everything again. It runs just fine on CPU with these arguments: --medvram --no-half --precision full --opt-sub-quad-attention --opt-split-attention-v1 --theme dark --autolaunch --disable-safe-unpickle --disable-nan-check --skip-torch-cuda-test
On GPU I still get the "failed to load model" error above when I add --use-directml.
I'm running an AMD FirePro 7100 8 GB... not a Radeon or something more ROCm/ZLUDA compatible (which I think you are running, hence no issues).
Again, a few months back it was running fine with --use-directml on that same GPU; I have no idea which update made it unfixable.
Win10, Python 3.10.6, Radeon RX 580 2048SP.
I have the same trouble. I installed stable-diffusion-webui-amdgpu for the first time, so I may be making mistakes that are obvious to others.
1. Cloned the repo: git clone https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu.git
2. Edited webui.bat: added set COMMANDLINE_ARGS= --backend directml at the top.
3. Ran webui.bat and received an error.
4. After adding --skip-torch-cuda-test to webui.bat, it cloned several repos, installed requirements, and broke with the error: launch.py: error: unrecognized arguments: --backend directml
5. After I removed --backend directml, it downloaded the model https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors and started the browser GUI.
6. After I set up a prompt and clicked the Generate button, I received the error RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half. It was fixed after I added --no-half to COMMANDLINE_ARGS. Now it is: set COMMANDLINE_ARGS= --skip-torch-cuda-test --no-half
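The "unrecognized arguments: --backend directml" failure is just the launcher's argument parser rejecting a flag it no longer defines (--backend was later replaced by --use-directml). A minimal sketch of that behavior, using a hypothetical mini-parser rather than the webui's real argument list:

```python
import argparse

# Hypothetical mini-parser: only the flags it defines are recognized,
# mirroring how launch.py rejects the removed --backend option.
parser = argparse.ArgumentParser()
parser.add_argument("--use-directml", action="store_true")
parser.add_argument("--skip-torch-cuda-test", action="store_true")

# parse_known_args separates flags the parser knows from ones it doesn't;
# the real launcher treats the leftovers as an error and exits.
args, unknown = parser.parse_known_args(["--backend", "directml"])
print(unknown)  # ['--backend', 'directml']
```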
At this point stable-diffusion-webui-amdgpu
works fine and is able to generate images, but using the CPU only: GPU utilization is around 0% and CPU around 100%.
Please suggest how to resolve this so the GPU is used for inference.
Please do not add --skip-torch-cuda-test. If your install has no problem, it should be able to launch without --skip-torch-cuda-test unless you want to run on CPU. If you get an error without --skip-torch-cuda-test, you have done something wrong.
If your card is not NVIDIA, you need to add one of the --use-* arguments.
---use-zluda: best for decent AMD cards (RX 6000 series or higher); works for older cards too.
---use-directml: legacy, but supports almost every card; slower, memory-consuming, inefficient.
---use-ipex: Intel IPEX.
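The guidance above can be sketched as a small helper. Note this is a hypothetical function illustrating the advice, not part of the webui:

```python
def pick_backend_flag(vendor: str, model: str = "") -> str:
    """Hypothetical helper mirroring the advice above (not webui code)."""
    vendor = vendor.lower()
    if vendor == "nvidia":
        return ""  # CUDA is the default path; no --use-* flag needed
    if vendor == "intel":
        return "--use-ipex"
    if vendor == "amd":
        # ZLUDA is preferred for RX 6000 series or higher (and can work on
        # some older cards); DirectML is the slower, near-universal fallback.
        if model.upper().replace(" ", "").startswith(("RX6", "RX7")):
            return "--use-zluda"
        return "--use-directml"
    return "--use-directml"  # DirectML supports almost every card

print(pick_backend_flag("AMD", "RX 580"))    # --use-directml
print(pick_backend_flag("AMD", "RX 6800"))   # --use-zluda
print(pick_backend_flag("Intel", "Arc A770"))  # --use-ipex
```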
@lshqqytiger, thanks for the quick answer.
To step back, all I did is:
1. Cloned the repo: git clone https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu.git
2. Edited webui.bat: added set COMMANDLINE_ARGS= --backend directml at the top.
3. Ran webui.bat and received this error:
Traceback (most recent call last):
File "f:\StableDiffusion\automatic1111\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
main()
File "f:\StableDiffusion\automatic1111\stable-diffusion-webui-amdgpu\launch.py", line 39, in main
prepare_environment()
File "f:\StableDiffusion\automatic1111\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 589, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
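For context, the check that raises this error is a simple guard at startup. A toy simplification (not the real launch_utils code, which actually probes torch) models the control flow:

```python
# Toy simplification of the GPU check in launch_utils.prepare_environment;
# the real code asks torch whether a GPU is usable, this just models the logic.
def check_gpu(cuda_available: bool, skip_test: bool) -> None:
    if skip_test:
        return  # --skip-torch-cuda-test bypasses the check (CPU mode)
    if not cuda_available:
        raise RuntimeError(
            "Torch is not able to use GPU; add --skip-torch-cuda-test "
            "to COMMANDLINE_ARGS variable to disable this check"
        )

check_gpu(cuda_available=True, skip_test=False)  # fine: GPU visible to torch
check_gpu(cuda_available=False, skip_test=True)  # fine: check skipped, CPU run
# check_gpu(cuda_available=False, skip_test=False) would raise RuntimeError
```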
Win10, Python 3.10.6, Radeon RX 580 2048SP.
Looks like all the necessary drivers/frameworks are installed.
--backend directml was replaced with --use-directml.
Just a report: stable-diffusion-webui-amdgpu works fine both with --use-directml and with CPU inference. Clean install; Win10, Python 3.10.6, Radeon RX 580 2048SP.
What are the full command args you are using? Also, are you installing anything else, like ROCm or ZLUDA? Your GPU was made in 2018, mine in 2014; it is still a good GPU, and --use-directml was working for me too, 6x faster than CPU, until an update this year messed everything up. It would be great to find out what causes the error above.
@Ael07 set COMMANDLINE_ARGS= --use-directml is enough to start working with the GPU.
Yes, I installed the HIP SDK, but the ROCm version in the latest HIP is 5.7.1, and AMD dropped Polaris support in 4.5. So it doesn't work with my RX 580, and I uninstalled it. With your GPU it may work well.
As for additional software: the latest version of the Adrenalin driver, plus the Vulkan runtime and SDK (https://vulkan.lunarg.com/sdk/home).
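The version logic here is easy to misread at a glance. A hypothetical helper (not an AMD tool) comparing ROCm versions, based only on the "Polaris support dropped in 4.5" fact above:

```python
def polaris_supported(rocm_version: str) -> bool:
    """Hypothetical check: AMD dropped Polaris (RX 400/500) support in
    ROCm 4.5, so only earlier versions still support those cards."""
    return tuple(int(p) for p in rocm_version.split(".")) < (4, 5)

print(polaris_supported("5.7.1"))  # False: the HIP SDK's ROCm won't drive an RX 580
print(polaris_supported("4.3.0"))  # True
```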
Thx for that. OK, if I use set COMMANDLINE_ARGS= --use-directml alone, it actually loads the model correctly, but then I get a runtime error when trying to generate the image; and of course it gives you a very helpful hint: "unspecified error"!! :D
Error completing request
Arguments: ('task(7pd8z4j2zpkbqmi)', <gradio.routes.Request object at 0x000001E9672D7430>, 'house', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\Users\y\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Users\y\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\Users\y\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "C:\Users\y\stable-diffusion-webui-directml\modules\processing.py", line 847, in process_images
res = process_images_inner(p)
File "C:\Users\y\stable-diffusion-webui-directml\modules\processing.py", line 1075, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\Users\y\stable-diffusion-webui-directml\modules\processing.py", line 1393, in sample
self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
File "C:\Users\y\stable-diffusion-webui-directml\modules\sd_samplers.py", line 41, in create_sampler
sampler = config.constructor(model)
File "C:\Users\y\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 31, in
I have an AMD RX500XT, with the stable-diffusion-webui-directml folder in System32. I had the same problem and solved it like this:
-Delete the venv folder
-Modify webui-user.bat with this: set COMMANDLINE_ARGS= --use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch
-Double-click webui-user.bat, and that's all
Now you can use the GPU for Stable Diffusion.
( I have the AMD HIP SDK for Windows installed https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html )
Looks like it is working for you. I managed to make it work before without the AMD HIP SDK. Right now I'm not sure what happened, but it is not working anymore with --use-directml... it works fine on CPU, at 5 minutes per picture, which is so annoying. The question is: does anyone have the same error as the one I sent last, or am I the only one? lol... Can anybody replicate the error? Thx
I also noticed that --no-half gives me the first error, model failed to load; if I delete it, the model loads but I get the second error. What exactly is --no-half supposed to do? Thx
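--no-half tells the webui to keep the model in full fp32 precision instead of fp16 (half precision). fp16 halves memory use but has a much narrower range and less precision, which some backends and older GPUs handle badly, and mixing the two dtypes is exactly what produces "mat1 and mat2 must have the same dtype, but got Float and Half". The fp16 limits are easy to see with the standard library's half-float packing; this is just an illustration of fp16 itself, not webui code:

```python
import struct

# Round-trip 0.1 through IEEE half precision ('e' format): precision is lost.
half_bits = struct.pack("<e", 0.1)
print(struct.unpack("<e", half_bits)[0])  # 0.0999755859375

# fp16 can only represent magnitudes up to about 65504; larger values
# overflow, which is one way fp16 pipelines end up producing NaNs/infs.
try:
    struct.pack("<e", 1e5)
except OverflowError as exc:
    print("overflow:", exc)
```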
Does anybody have this issue, or can anyone replicate my error? Thanks
What happened?
I re-installed DirectML stable diffusion from scratch and it works correctly on CPU, generating each image in 5 minutes! As soon as I add --use-directml, it can't load models anymore; the webui loads correctly but nothing runs.
Steps to reproduce the problem
1. Add --use-directml to webui-user.bat.
2. Run webui-user.bat on a clean, working installation of DirectML stable diffusion (it works perfectly without --use-directml).
3. The webui loads but models fail to load.
What should have happened?
It should have worked via the AMD GPU... it only works without --use-directml, which defeats the point: we are using this version precisely to run on an AMD GPU rather than the CPU!
No idea what I'm doing wrong here... it used to work perfectly on my GPU, but then an update a few months ago messed everything up.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
sysinfo-2024-05-26-12-58.json
Console logs
Additional information
No response