lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

New updates decrease performance #106

Closed FrakerKill closed 1 year ago

FrakerKill commented 1 year ago

Is there an existing issue for this?

What happened?

After the latest updates, my RX 6600 generates 512x512 at 5-8 s/it instead of the previous 1.5 it/s. And when you then try to generate another image, it fails because it cannot allocate even a small amount of memory (about 4.4 MB):

RuntimeError: Could not allocate tensor with 4588800 bytes. There is not enough GPU video memory available!

Steps to reproduce the problem

Generate a 512x512 image with the DPM++ 2M Karras sampler.

What should have happened?

Generation should run at this GPU's previous speed (around 1.5 it/s).

Commit where the problem happens

WEBUI

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome, Microsoft Edge

Command Line Arguments

set COMMANDLINE_ARGS=--listen --autolaunch

List of extensions

[screenshot: list of installed extensions]

Console logs

100%|██████████████████████████████████████████████████████████████| 50/50 [05:55<00:00,  7.12s/it]
Total progress: 120it [20:46, 10.39s/it]
  6%|███▊                                                           | 3/50 [00:31<08:09, 10.42s/it]
Error completing request
Arguments: ('task(j03jaikhheydwsa)', "(masterpiece, high quality, highres,Highest picture quality), (Master's work), (zentangle:1.2),classical,noble,princess,negative space,Tyndall Effect,light background,(white background:1.5),bare shoulders, barefoot, bare back, shiny skin, shiny hair, dress,nsfw, vibrant color,(1girl:1.3), mecha-kimono, (mechanical arm:1.2), black hair,\nZentangle, structured patterns, meditative drawing, intricate designs, focus and relaxation, creative doodling, artistic expression,Neon Light, light painting, long exposure, dynamic streaks,\nphoto manipulation, altered realities, fantastical scenes, digital artistry,\ncross-hatching, graphic linework, textural shading, inked lines, dynamic contrast, expressive style, captivating detail, chiaroscuro technique,", '(bad-artist:1.0), (bad-artist-anime:1.0), (bad_prompt_version2:0.8), (bad-hands-5:1.0), (badhandv4:1.0), (worst quality:2), (low quality:2), (normal quality:2), (monochrome:1.2), (grayscale:1.2), (EasyNegative:1.0),', [], 50, 15, False, False, 1, 1, 9, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.5, 1.25, 'Latent', 20, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x000002CB9EC86080>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\processing.py", line 669, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\processing.py", line 887, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 602, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 154, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
    h = module(h, emb, context)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 245, in split_cross_attention_forward_invokeAI
    r = einsum_op(q, k, v)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 220, in einsum_op
    return einsum_op_dml(q, k, v)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 208, in einsum_op_dml
    return einsum_op_tensor_mem(q, k, v, (mem_reserved - mem_active) if mem_reserved > mem_active else 1)
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 193, in einsum_op_tensor_mem
    return einsum_op_slice_1(q, k, v, max(q.shape[1] // div, 1))
  File "D:\Datos\Downloads\GitHub\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 168, in einsum_op_slice_1
    r[:, i:end] = einsum_op_compvis(q[:, i:end], k, v)
RuntimeError: Could not allocate tensor with 4588800 bytes. There is not enough GPU video memory available!

  0%|                                                                       | 0/50 [00:03<?, ?it/s]
Error completing request
Arguments: ('task(a4stoe4h50j4pug)', "(masterpiece, high quality, highres,Highest picture quality), (Master's work), (zentangle:1.2),classical,noble,princess,negative space,Tyndall Effect,light background,(white background:1.5),bare shoulders, barefoot, bare back, shiny skin, shiny hair, dress,nsfw, vibrant color,(1girl:1.3), mecha-kimono, (mechanical arm:1.2), black hair,\nZentangle, structured patterns, meditative drawing, intricate designs, focus and relaxation, creative doodling, artistic expression,Neon Light, light painting, long exposure, dynamic streaks,\nphoto manipulation, altered realities, fantastical scenes, digital artistry,\ncross-hatching, graphic linework, textural shading, inked lines, dynamic contrast, expressive style, captivating detail, chiaroscuro technique,", '(bad-artist:1.0), (bad-artist-anime:1.0), (bad_prompt_version2:0.8), (bad-hands-5:1.0), (badhandv4:1.0), (worst quality:2), (low quality:2), (normal quality:2), (monochrome:1.2), (grayscale:1.2), (EasyNegative:1.0),', [], 50, 15, False, False, 1, 1, 9, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.5, 1.25, 'Latent', 20, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x000002CB9EC86080>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
  [... identical to the traceback above ...]
RuntimeError: Could not allocate tensor with 4588800 bytes. There is not enough GPU video memory available!

Additional information

No response
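
For context on the traceback: the failure happens in the InvokeAI-style split-attention path (split_cross_attention_forward_invokeAI), which slices the attention einsum along the query dimension when free VRAM is low. The sketch below is a simplified, self-contained paraphrase of that slicing idea, not the repository's exact code; note that each slice still has to allocate a slice of the full score matrix, which is why even a ~4.4 MB allocation can fail once VRAM is exhausted or fragmented.

import torch

def einsum_op_compvis(q, k, v):
    # full attention for one query slice: scores -> softmax -> weighted sum of values
    scale = q.shape[-1] ** -0.5
    s = torch.einsum('b i d, b j d -> b i j', q, k) * scale
    return torch.einsum('b i j, b j d -> b i d', s.softmax(dim=-1), v)

def einsum_op_slice_1(q, k, v, slice_size):
    # process the query sequence in slices so only a (b, slice, j) score tensor
    # is alive at any one time, instead of the full (b, i, j) one
    r = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device, dtype=q.dtype)
    for i in range(0, q.shape[1], slice_size):
        end = min(i + slice_size, q.shape[1])
        r[:, i:end] = einsum_op_compvis(q[:, i:end], k, v)
    return r

# toy shapes roughly matching SD self-attention at 512x512 (8 heads, 4096 tokens)
q = torch.randn(8, 4096, 40)
k = torch.randn(8, 4096, 40)
v = torch.randn(8, 4096, 40)
print(einsum_op_slice_1(q, k, v, slice_size=512).shape)  # torch.Size([8, 4096, 40])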

lshqqytiger commented 1 year ago

I think you are using different command-line arguments than you did before. Add --opt-sub-quad-attention and try again.
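
For reference, --opt-sub-quad-attention selects the sub-quadratic attention optimization, which chunks both the query and the key/value sequences and merges chunk results with a running softmax, so the full score matrix is never materialized. Below is a rough illustrative sketch of that idea, not the extension's actual code; the function name and chunk sizes are made up for illustration.

import torch

def subquad_attention(q, k, v, q_chunk=1024, kv_chunk=1024):
    # chunk both the query and the key/value sequences; merge partial results
    # with a numerically stable running softmax (log-sum-exp bookkeeping)
    scale = q.shape[-1] ** -0.5
    outs = []
    for qi in range(0, q.shape[1], q_chunk):
        qc = q[:, qi:qi + q_chunk] * scale
        acc = lse = None  # normalized value accumulator and running log-sum-exp
        for ki in range(0, k.shape[1], kv_chunk):
            kc, vc = k[:, ki:ki + kv_chunk], v[:, ki:ki + kv_chunk]
            s = torch.einsum('b i d, b j d -> b i j', qc, kc)
            m = s.amax(dim=-1, keepdim=True)
            p = (s - m).exp()
            chunk_acc = torch.einsum('b i j, b j d -> b i d', p, vc)
            chunk_lse = m + p.sum(dim=-1, keepdim=True).log()
            if acc is None:
                lse = chunk_lse
                acc = chunk_acc * (m - lse).exp()
            else:
                new_lse = torch.logaddexp(lse, chunk_lse)
                acc = acc * (lse - new_lse).exp() + chunk_acc * (m - new_lse).exp()
                lse = new_lse
        outs.append(acc)
    return torch.cat(outs, dim=1)

# sanity check against full attention on small tensors
q, k, v = (torch.randn(2, 257, 40) for _ in range(3))
s_full = torch.einsum('b i d, b j d -> b i j', q, k) * 40 ** -0.5
full = torch.einsum('b i j, b j d -> b i d', s_full.softmax(dim=-1), v)
print(torch.allclose(subquad_attention(q, k, v, 64, 32), full, atol=1e-5))  # True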

FrakerKill commented 1 year ago

I tried that too, but after the latest updates:

[screenshot]

Nikitaefimov commented 1 year ago

Same here, but on a 5300M: it went from 3.5 s/it to 6.5 s/it. I tried different optimizations, but the result is the same. I also tried a clean install. It's only a speed issue; there is no video memory error.

lshqqytiger commented 1 year ago

I can't reproduce this. It generates 768x512 at 1.2 s/it (first generation) to 1.07 it/s for me (RX 5700 XT). Do you have more than one GPU? [screenshot: 2023-05-05 012253]

FrakerKill commented 1 year ago

Which command-line arguments have you configured? And are you on torch 2.0.0?

asthomas commented 1 year ago

I'm using an RX 6600. This is a completely clean install: nothing added, all default settings. Adding --opt-sub-quad-attention makes no difference.

I get the memory allocation error with 768x512, but not with 512x512.

My average iteration time is 5.4 seconds. I can get 7.5 seconds with CPU-only.

JJGall commented 1 year ago

It was slow on my 6800 until I set these arguments:

--precision full --no-half --autolaunch --opt-sub-quad-attention --disable-nan-check --no-half-vae --opt-sdp-attention --opt-split-attention

It is usually slow on the first generation, but after that I get about 2 it/s at 512x512. I'm not exactly sure what each argument does or which ones I don't need, but it works for me.

qwerkilo commented 1 year ago

Use --opt-sdp-attention; it gives the same performance as --opt-sub-quad-attention and will not generate black images.
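
For reference, --opt-sdp-attention routes attention through PyTorch 2.0's built-in fused kernel, torch.nn.functional.scaled_dot_product_attention. A minimal sketch, assuming torch >= 2.0; which backend actually runs (math, memory-efficient, or flash) depends on the device and build:

import torch
import torch.nn.functional as F

q = torch.randn(8, 4096, 40)
k = torch.randn(8, 4096, 40)
v = torch.randn(8, 4096, 40)
out = F.scaled_dot_product_attention(q, k, v)  # fused attention, torch >= 2.0
print(out.shape)  # torch.Size([8, 4096, 40])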

lshqqytiger commented 1 year ago

Which command-line arguments have you configured? And are you on torch 2.0.0?

torch 2.0.0, torch-directml 0.2.0.dev230426, with --no-half --precision full --opt-sub-quad-attention
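
If you want to confirm what your own install is running, here is a quick check from the webui's venv. importlib.metadata is in the standard library on Python 3.8+; torch_directml.device_name ships with recent torch-directml builds, and if yours lacks it, the two version prints are enough.

import torch
import torch_directml
from importlib.metadata import version

print(torch.__version__)              # e.g. 2.0.0
print(version("torch-directml"))      # e.g. 0.2.0.dev230426
print(torch_directml.device_name(0))  # the adapter DirectML will use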

FrakerKill commented 1 year ago

It was slow on my 6800 until I set these arguments:

--precision full --no-half --autolaunch --opt-sub-quad-attention --disable-nan-check --no-half-vae --opt-sdp-attention --opt-split-attention

It is usually slow on the first generation, but after that I get about 2 it/s at 512x512. I'm not exactly sure what each argument does or which ones I don't need, but it works for me.

Perfect, with --opt-sdp-attention: [screenshot]

FrakerKill commented 1 year ago

But there are still a lot of allocation problems (this is at 512x512):

[screenshot]

FrakerKill commented 1 year ago

We can continue this in #38.