Closed · imamqaum1 closed this 6 months ago
Inside webui-user.bat:
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check
set SAFETENSORS_FAST_GPU=1
Works for me with a 6800 XT card (16 GB). I can now actually generate above 512x512 without it immediately crashing. I still get some issues, but I can generate tens of images before it even thinks of being weird.
Any of the other command-line args I see other people use make the program completely hang and refuse to generate anything, so if you have the same card as I do, just use what I put above ;)
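For anyone unsure where these lines go: a minimal webui-user.bat might look like the sketch below. The surrounding scaffolding (the empty PYTHON/GIT/VENV_DIR lines and the final call) follows the stock template that ships with the webui; only the two set lines come from this thread.

```bat
@echo off
rem Illustrative sketch of webui-user.bat; only the COMMANDLINE_ARGS and
rem SAFETENSORS_FAST_GPU lines are the settings suggested in this thread.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check
set SAFETENSORS_FAST_GPU=1
call webui.bat
```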
After the token merging update you pretty much have to set the token merging ratio to about 0.5 and Negative Guidance minimum sigma to about 3 (Optimizations tab in Settings). That gives a great boost in performance and memory efficiency without sacrificing much. But you can't use the sub-quadratic optimization together with token merging.
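To see why a merging ratio of 0.5 helps so much: self-attention cost grows with the square of the token count, so merging away half the tokens cuts attention work roughly fourfold. A back-of-the-envelope sketch — the 4096-token figure assumes SD 1.5's 64x64 latent for a 512x512 image, and `merged_speedup` is a hypothetical helper for illustration, not part of the webui:

```python
def attention_cost(tokens: int) -> int:
    # Naive self-attention cost grows quadratically with the token count.
    return tokens * tokens

def merged_speedup(tokens: int, ratio: float) -> float:
    """Rough upper bound on attention speedup from token merging.

    `ratio` is the fraction of tokens merged away (0.5 keeps half).
    """
    kept = int(tokens * (1.0 - ratio))
    return attention_cost(tokens) / attention_cost(kept)

# A 512x512 image has a 64x64 latent = 4096 tokens (SD 1.5 assumption).
print(merged_speedup(4096, 0.5))  # quadratic cost -> 4.0x fewer attention ops
```

This is only the attention portion of a step, so the end-to-end speedup is smaller, which matches the "great boost without sacrificing much" observation above.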
This seems to break my install; lots of black images.
This doesn't seem to be helping much. I have fewer crashes, but more empty black images. I have a 6700 XT (12 GB), if that helps explain it.
@Grathew I mentioned that you can't use sub-quad attention with token merging. Choose Doggettx or V1 instead. If you're not aware, only one of the --opt arguments can be active at a time; setting several just leaves one active and the others ignored.
I reinstalled the latest AMD driver, and images now generate normally at 960x540 on my RX 5500 XT (4 GB). But it is almost impossible to use the Hires. fix function. How should I set it up?
I just fixed mine using these args: set COMMANDLINE_ARGS= --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check
I had been trying to generate for a couple of days without any progress. After adding the args I can now generate 512x512 with Hires. fix to 1024x1024 (upscale by 2), batch count 8, up to 50 steps, using 3 ControlNet units, no problem. I'm on the latest AMD driver (23.9.3) and latest chipset drivers. Specs: Windows 11, AMD 5800X CPU, ASUS Dual 6700 XT OC GPU, 32 GB RAM, ControlNet 1.1.410, the A1111 fork from lshqqytiger. Checkpoints 1.5, 2.0, and 2.1 work (SDXL no luck, still testing). Hope it helps!
First - the arguments. Second - I'm not sure what maximum resolution your GPU is capable of. I can generate a maximum of 600x800 on my RX 580 (8 GB) with these arguments:
--medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check
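The reason resolution hits a hard wall like this: a naive attention implementation materializes a tokens x tokens matrix per head, which balloons quadratically with latent size — exactly the cost that --opt-split-attention-v1 and --opt-sub-quad-attention avoid by computing attention in chunks. A rough estimate, assuming SD 1.5's 1/8 latent scaling and 8 attention heads (both assumptions about the model, not figures from this thread):

```python
def naive_attention_bytes(width: int, height: int, heads: int = 8,
                          bytes_per_el: int = 4) -> int:
    # SD 1.5 latents are 1/8 of the pixel resolution; each latent cell is a token.
    tokens = (width // 8) * (height // 8)
    # A naive attention map stores one tokens x tokens fp32 matrix per head.
    return heads * tokens * tokens * bytes_per_el

for w, h in [(512, 512), (600, 800), (1024, 1024)]:
    mib = naive_attention_bytes(w, h) / 2**20
    print(f"{w}x{h}: ~{mib:,.0f} MiB for one naive attention map")
```

At 512x512 this already comes to 512 MiB for a single layer's attention map, which is why chunked attention is effectively mandatory on 8 GB cards.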
I am having a similar issue. I have an RX 580 with 8 GB of VRAM and 2x16 GB RAM. About 5 days ago I could still generate images well above 768x512 and even upscale them around 4x, I believe, with no issues at all. But yesterday it suddenly stopped working, claiming that I don't have enough GPU video memory available. I tried uninstalling everything (Python 3.10.6, Git, and Stable Diffusion) and then reinstalled it all, but it still didn't work. I'm really hoping this isn't a graphics-card problem, and I don't think it is, because I can run triple-A games smoothly without crashes, so maybe it has something to do with Stable Diffusion's latest updates.
I have an RX 580 (8 GB) and 2x8 GB RAM. I tried the arguments mentioned before and they work reasonably well for me (at least I can generate 600x800 now; previously I got an error every 2-3 512x512 images, and 600x800 was a guaranteed error). Also, I'm using official SD.Next, if that matters.
Same error with memory allocation here. Is there no way to chunk this data?
I can't find the webui-user.bat file
Thank you!! Got me up & running on my AMD RX 6600 (finally)
My 6800, Win 11 Pro 22H2, Adrenalin Edition 23.4.1.
1. It is important, for me at least, that the SD folder is in the root of drive C.
2. Open CMD in the root of the stable-diffusion-webui-directml directory, then run git pull to ensure the latest update, followed by pip install -r requirements.txt <- it was at this point I knew I'd messed up during initial setup, because I saw several missing items getting installed.
3. In the webui-user.bat file, I added the following line: set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-sub-quad-attention --opt-split-attention --opt-split-attention-v1 --disable-nan-check --autolaunch
Results at 1024x1024:
Euler a ---- max 26/26 [01:16<00:00, 2.96s/it]
DPM++ 2M Karras ---- max 26/26 [02:19<00:18, 6.05s/it]
with my trained model (.ckpt).
Model deliberate_v2.safetensors, 1024x1280, DPM++ 2M Karras ---- max 26/26 [01:50<00:00, 4.24s/it]
I usually generate at 440x640, 4 pictures per batch, and then do any necessary upscaling in Topaz Photo AI.
Good luck.
P.S. At 1280x1280: RuntimeError: Could not allocate tensor with 377487360 bytes. There is not enough GPU video memory available! -)))
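The update steps from this comment, collected in one place as a sketch. The C:\ path reflects this commenter's layout (adjust it to your install), and activating the venv first is my addition so that pip installs into the webui's environment rather than the system Python:

```bat
rem Sketch of the update procedure described above; paths are examples.
cd /d C:\stable-diffusion-webui-directml
git pull
call venv\Scripts\activate.bat
pip install -r requirements.txt
```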
So I went ahead and tried this solution, and it was after the "pip install -r requirements.txt" step that things went wrong for me. Now whenever I run webui-user.bat it spits out this:
venv "M:\Program Files\Stable Diffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: cfa6e40e6d7e290b52940253bf705f282477b890
Traceback (most recent call last):
File "M:\Program Files\Stable Diffusion\stable-diffusion-webui-directml\launch.py", line 48, in <module>
main()
File "M:\Program Files\Stable Diffusion\stable-diffusion-webui-directml\launch.py", line 39, in main
prepare_environment()
File "M:\Program Files\Stable Diffusion\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
I followed this tutorial: https://www.youtube.com/watch?v=mKxt0kxD5C0&t=1087s&ab_channel=FE-Engineer
and then added the following line to the webui-user.bat file: set COMMANDLINE_ARGS=--use-directml --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check
When I add --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check, it only works with 1.5 models, not XL models. Adding the args speeds up generation significantly, but I lose the XL models.
RX 6800 GPU.
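One thing that may be worth trying for the XL case (an assumption on my part, not something tested in this thread): --precision full --no-half forces fp32, which roughly doubles memory use and hits SDXL's larger UNet hardest, and newer webui builds (1.6.0+) added a separate --medvram-sdxl flag. A hypothetical variant to experiment with:

```bat
rem Untested sketch: drop full precision for SDXL and use the
rem SDXL-specific medvram flag added in webui 1.6.0+.
set COMMANDLINE_ARGS=--use-directml --medvram-sdxl --no-half-vae --opt-sub-quad-attention --disable-nan-check
```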
I added both and nothing changed 🤷. My GPU is an RX 6600.
Did you fix this?
For me it was the simple combo of adding --medvram to the .bat file and checking the Low VRAM box in ControlNet. I installed ControlNet last night; come morning I was getting the OP's error. This worked.
Ryzen 3600, RX 580, 16 GB.
Can a low-memory graphics card run on the CPU only? AMD RX 550 2 GB. set COMMANDLINE_ARGS=--use-directml --lowvram --opt-split-attention --enable-insecure-extension-access --skip-torch-cuda-test
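On the CPU-only question: the webui does have a --use-cpu option that takes a list of module names (with all accepted), though generation will be extremely slow. An untested sketch for a 2 GB card:

```bat
rem Untested sketch: push every module onto the CPU; expect minutes per image.
set COMMANDLINE_ARGS=--use-cpu all --skip-torch-cuda-test --no-half --precision full
```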
Is there an existing issue for this?
What happened?
Stable Diffusion crashes after generating a few pixels, with the error: Could not allocate tensor with 377487360 bytes. There is not enough GPU video memory available!
Steps to reproduce the problem
What should have happened?
Stable Diffusion should run normally and generate the images.
Commit where the problem happens
RuntimeError: Could not allocate tensor with 377487360 bytes. There is not enough GPU video memory available!
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
Microsoft Edge
Command Line Arguments
List of extensions
No
Console logs
Additional information
RX 570 4 GB, Ryzen 5 3500, 8 GB RAM (single channel), AMD Software PRO Edition driver, DirectX 12