lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI

black image output #23

Open 1ewig opened 1 year ago

1ewig commented 1 year ago

Is there an existing issue for this?

What happened?

The UI boots up without errors, but after generating an image, the result comes out as a black image.

Steps to reproduce the problem

  1. Go to the web UI
  2. Press Generate
  3. A black image comes out

What should have happened?

The output should have been at least a bit colorful, not solid black.

Commit where the problem happens

commit: 3e855524

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
REM Sub-quadratic attention plus low-VRAM mode; --disable-nan-check skips the check
REM that would otherwise raise an error instead of returning a black (NaN) image.
set COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch

call webui.bat

List of extensions

none

Console logs

venv "E:\New folder\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 3e8555242836192c9e3e79c91962418e1f51d5d6
Installing requirements for Web UI
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch
Interrogations are fallen back to cpu. This doesn't affect on image generation. But if you want to use interrogate (CLIP or DeepBooru), check out this issue: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/10
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
Loading weights [e04b020012] from E:\New folder\stable-diffusion-webui-directml\models\Stable-diffusion\rpg_V4.safetensors
Creating model from config: E:\New folder\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying sub-quadratic cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 42.4s (load weights from disk: 2.5s, create model: 1.8s, apply weights to model: 30.8s, apply half(): 6.5s, load VAE: 0.1s, load textual inversion embeddings: 0.5s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [04:45<00:00, 14.29s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [04:41<00:00, 14.07s/it]

Additional information

The model I'm using is RPG v4, and it's a .safetensors file.

Miraihi commented 1 year ago

More often than not, the --precision full --no-half arguments are required to prevent the black squares. Also try --medvram instead of --lowvram; it may well be enough and gives a sizable speed boost. Adding --opt-split-attention-v1 won't hurt either.
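
For anyone wanting to apply this, a minimal webui-user.bat reflecting the suggestion might look like the following. This is a sketch based on the comment above; the flag set is the commenter's suggestion, not project guidance:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
REM --medvram instead of --lowvram: less aggressive offloading, usually faster.
REM --precision full --no-half: keep weights and computation in fp32 to avoid NaN (black) outputs.
set COMMANDLINE_ARGS=--medvram --precision full --no-half --opt-split-attention-v1 --disable-nan-check --autolaunch

call webui.bat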

1ewig commented 1 year ago

> More often than not, the --precision full --no-half arguments are required to prevent the black squares. Also try --medvram instead of --lowvram; it may well be enough and gives a sizable speed boost. Adding --opt-split-attention-v1 won't hurt either.

All right, the output is not a black image anymore, thank you. But I can't generate more than once without restarting the web UI, so it's one picture per run. Help me twice, eh?

ostionig commented 1 year ago

I have the same problem. Even after adding the --precision full --no-half parameters, it still isn't solved. Is there any other solution?

lshqqytiger commented 1 year ago

Most black images are due to the --opt-sub-quad-attention option. --opt-sub-quad-attention makes generation faster and lowers VRAM usage (so you can generate higher-resolution images), but with it enabled, black images can appear. The only known workaround is to keep generating until the images stop coming out black. (I kept generating images while debugging this, and at some point the images started coming out properly.)

Miraihi commented 1 year ago

On an RX 580 (8 GB), black images do appear, but very rarely (1 per 100 generations or even fewer). I use the whole set of arguments, including --opt-sub-quad-attention.

sooxt98 commented 1 year ago

I just removed --opt-sub-quad-attention and it works now. I use --medvram for faster generation on my RX 6600 XT.
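
In webui-user.bat terms, that setup reduces to something like this (a sketch based on this comment; --autolaunch and other quality-of-life flags omitted):

REM No --opt-sub-quad-attention; per the comment above, --medvram alone was enough on an RX 6600 XT.
set COMMANDLINE_ARGS=--medvram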

sundevista commented 1 year ago

@Miraihi I have an RX 580 too, can you share all the arguments you use to launch the app?

Miraihi commented 1 year ago

> @Miraihi I have an RX 580 too, can you share all the arguments you use to launch the app?

The ones that matter are --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check. The maximum picture size I can generate is 800x600; at that size I get 2-3 seconds per iteration.
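
Plugged into webui-user.bat, that set would look like this (a sketch; the flags are taken verbatim from the comment above):

REM Full RX 580 (8 GB) flag set as reported above.
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check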

sundevista commented 1 year ago

> @Miraihi I have an RX 580 too, can you share all the arguments you use to launch the app?
>
> The ones that matter are --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check. The maximum picture size I can generate is 800x600; at that size I get 2-3 seconds per iteration.

Sometimes it crashes when I try to do more than 100 iterations. Did you face that?

Miraihi commented 1 year ago

> Sometimes it crashes when I try to do more than 100 iterations. Did you face that?

To be honest, I never do more than 30 iterations. 100 seems extremely excessive to me, at least for the samplers I use (Euler a and DPM++ 2M Karras). I recommend looking for better prompts and more appropriate models rather than cranking up that many iterations.

sundevista commented 1 year ago

> Sometimes it crashes when I try to do more than 100 iterations. Did you face that?
>
> To be honest, I never do more than 30 iterations. 100 seems extremely excessive to me, at least for the samplers I use (Euler a and DPM++ 2M Karras).

Previously I was using Google Colab, where I set the batch count to 4 and iterations to 50. How do you work with these settings? Maybe you're only making single samples?

Miraihi commented 1 year ago

> Previously I was using Google Colab, where I set the batch count to 4 and iterations to 50. How do you work with these settings? Maybe you're only making single samples?

Making really big batches is not possible on DirectML in its current state. Also, a large number of iterations isn't required to get a decent picture (go to Civitai and look at the examples). So what I currently do is use the ControlNet extension to get a decent txt2img base, then inpaint until I'm satisfied.

kiedrim commented 6 months ago

Hello, maybe I'm too late to this topic, but I had the same issue with black outputs. Possible fix: what helped me was changing the checkpoint to a different one, and then back to the one I wanted to use. Now, every time I get a black output, I just quickly switch the checkpoint (model) in the dropdown, and then it generates pictures correctly. (A scripted version of this swap is sketched below.)

I'm using an RX 6700 XT with the following arguments:

--use-directml --medvram --precision full --no-half --no-half-vae --disable-nan-check --opt-split-attention-v1 --upcast-sampling
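
If this fork inherits the standard AUTOMATIC1111 HTTP API (an assumption; it also requires launching with --api), the checkpoint swap above can be scripted instead of done through the dropdown. A minimal sketch using the curl bundled with Windows 10+; the checkpoint names are placeholders and may need the hash suffix shown in the UI dropdown:

REM Assumes the UI was started with --api and is listening on 127.0.0.1:7860.
REM Available checkpoint titles can be listed with: curl http://127.0.0.1:7860/sdapi/v1/sd-models
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/options ^
     -H "Content-Type: application/json" ^
     -d "{\"sd_model_checkpoint\": \"some-other-model.safetensors\"}"
REM ...then switch back to the checkpoint you actually want to use:
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/options ^
     -H "Content-Type: application/json" ^
     -d "{\"sd_model_checkpoint\": \"rpg_V4.safetensors\"}"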