Closed: Blaz1kennBG closed this issue 10 months ago
Which GPU/vendor are you talking about? If you're on an AMD GPU, try these command-line args (working on a 7800 XT):
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-split-attention --opt-sub-quad-attention --disable-nan-check
set SAFETENSORS_FAST_GPU=1
git pull
call webui.bat
Do you have a dml directory under modules?
Yes, I do have dml.
AMD RX 6700 XT, I would try that now
EDIT: Does not work. The CPU is still at 100% utilization; the GPU and GPU memory are barely touched. I did notice a brief 100% GPU usage spike, but it only lasted a second.
Try again with --backend directml --device-id 0. (0 is an example; replace it if you have other cards installed.)
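Applied to the webui-user.bat from the earlier reply, those flags would go on the COMMANDLINE_ARGS line. This is only a sketch; keep whatever other flags your setup needs:

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
REM --backend directml selects the DirectML backend; --device-id 0 picks the first GPU
set COMMANDLINE_ARGS=--no-half --backend directml --device-id 0

call webui.bat
```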
Arguments are as follows: --no-half --backend directml --device-id 0. However, I get an error about the torch_directml package:
Traceback (most recent call last):
File "D:\stable-diffusion-webui-directml\launch.py", line 48, in <module>
main()
File "D:\stable-diffusion-webui-directml\launch.py", line 44, in main
start()
File "D:\stable-diffusion-webui-directml\modules\launch_utils.py", line 476, in start
import webui
File "D:\stable-diffusion-webui-directml\webui.py", line 13, in <module>
initialize.imports()
File "D:\stable-diffusion-webui-directml\modules\initialize.py", line 34, in imports
shared_init.initialize()
File "D:\stable-diffusion-webui-directml\modules\shared_init.py", line 25, in initialize
dml.initialize()
File "D:\stable-diffusion-webui-directml\modules\dml\__init__.py", line 40, in initialize
from modules.dml.backend import DirectML # pylint: disable=ungrouped-imports
File "D:\stable-diffusion-webui-directml\modules\dml\backend.py", line 4, in <module>
import torch_directml # pylint: disable=import-error
ModuleNotFoundError: No module named 'torch_directml'
Edit: Activating the virtual environment and doing pip install torch-directml worked. The error is gone, and apparently --backend directml or --device-id 0 did fix the problem.
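For anyone hitting the same ModuleNotFoundError, the steps above look roughly like this from a command prompt in the webui directory (paths assume the default venv location; adjust if yours differs):

```bat
REM Activate the webui's virtual environment (default location)
venv\Scripts\activate.bat

REM Install the DirectML backend for PyTorch into that environment
pip install torch-directml
```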
However, there's now a memory problem:
RuntimeError: Could not allocate tensor with 9831040 bytes. There is not enough GPU video memory available!
Apologies, MariyanEOD is my other account and I did not notice the difference.
I did a new setup from scratch with --backend directml --no-half --device-id 0, and voilà! Everything works now! The only remaining issue is a VRAM memory leak: after about 4-5 images, SD throws the not-enough-memory error.
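If the VRAM error keeps recurring, one untested option is to combine the flags that worked here with the memory-saving options suggested in the first reply of this thread, e.g.:

```bat
REM Sketch only: the working DirectML flags plus the memory-saving
REM options (--medvram, --opt-sub-quad-attention) mentioned earlier
set COMMANDLINE_ARGS=--backend directml --device-id 0 --no-half --medvram --opt-sub-quad-attention
```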
Is there an existing issue for this?
What happened?
After following the whole guide, I tried to generate a picture, but I noticed that the CPU was maxing out while the GPU was idle. I have only one argument set:
set COMMANDLINE_ARGS=--no-half
Steps to reproduce the problem
What should have happened?
txt2img should use the GPU instead of the CPU.
Sysinfo
What browsers do you use to access the UI?
Google Chrome
Console logs
Additional information
No errors whatsoever. Just regular generation, but on the CPU.
sysinfo-2023-09-09-22-46.txt