AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Certain specific resolutions triggering CUDA out of memory #11102

Open · velourlawsuits opened this issue 1 year ago

velourlawsuits commented 1 year ago

Is there an existing issue for this?

What happened?

I have been able to generate images up to 1900x1900 locally on an RTX 4080. I've been experimenting with larger sizes for printing, generally at an 8:10 ratio for framing. What I've just noticed is that I can generate images at 1632x2048 or 1648x2048 pixels, but if I try to generate at a proper 8:10 ratio of 1632x2040, I trigger a CUDA out-of-memory error, even though that is a smaller resolution.

Since the time I was first generating 1900x1900 and 1632x2040 images, I have installed many extensions and edited requirements_versions.txt to get Dreambooth working, and I'm wondering whether that might be related. Has anyone noticed anything similar? I realize most people don't bother with larger resolutions because the model mostly produces garbage at that size, but you'd be surprised at what is possible if you're patient.
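For context on why two near-identical resolutions might behave so differently: the einsum that fails in the traceback builds a self-attention score matrix over the latent tokens, whose size grows quadratically with resolution. A rough, hypothetical estimate follows; the head count and dtype are assumptions, not values taken from the model config, and webui's chunked attention normally avoids materializing the full matrix:

```python
def attn_matrix_gib(width, height, latent_scale=8, heads=8, bytes_per_score=4):
    """Rough size of a full self-attention score matrix over latent tokens.

    Illustration only: heads and bytes_per_score are assumed values, and the
    chunked attention path is supposed to avoid allocating the whole matrix.
    """
    tokens = (width // latent_scale) * (height // latent_scale)
    return tokens * tokens * heads * bytes_per_score / 2**30  # GiB

for w, h in [(1632, 2048), (1632, 2040)]:
    print(f"{w}x{h}: ~{attn_matrix_gib(w, h):.1f} GiB")
```

Under these assumptions both resolutions imply a similarly huge full matrix (~80 GiB), which suggests the difference is not raw size but that the chunked attention path copes with one shape and not the other, consistent with the divisibility problems discussed in #10887 and #10403.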

Steps to reproduce the problem

Try generating at 1632x2048 and then at 1632x2040, and compare what happens.

What should have happened?

CUDA should not run out of memory, since I can generate larger resolutions without issue.

Commit where the problem happens

A1111 version 1.3.1 and SD version 2.1-768

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

None, but I edited requirements_versions.txt:

blendmodes==2022
transformers==4.25.1
accelerate==0.18.0
basicsr==1.4.2
gfpgan==1.3.8
gradio==3.31.0
numpy==1.23.5
Pillow==9.5.0
realesrgan==0.3.0
torch
omegaconf==2.2.3
pytorch_lightning==1.9.4
scikit-image==0.20.0
timm==0.6.7
piexif==1.1.3
einops==0.4.1
jsonmerge==1.8.0
clean-fid==0.1.35
resize-right==0.0.2
torchdiffeq==0.2.3
kornia==0.6.7
lark==1.1.2
inflection==0.5.1
GitPython==3.1.30
torchsde==0.2.5
safetensors==0.3.1
httpcore<=0.15
fastapi==0.94.0
tomesd==0.1.2

List of extensions

Deforum, txt2video, Dreambooth, Prompt Generator, Wildcards Manager

Console logs

Arguments: ('task(9wsubu1qpoq7kdg)', 'a __trev/color/type__ __jumbo/medium/photography/filmtypes__ film of __trev/locations/destinations__ by __artists/Photography/by_country/canada__', 'text, watermark, soft, out of focus, blurry', [], 60, 4, False, False, 100, 1, 5.5, -1.0, -1.0, 0, 0, 0, False, 2040, 1632, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002A5A2739AB0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\processing.py", line 728, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\processing.py", line 976, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\k-diffusion\k_diffusion\sampling.py", line 198, in sample_dpm_2
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 137, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
    return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
    return self.inner_model.apply_model(x, t, cond)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_hijack_optimizations.py", line 247, in split_cross_attention_forward
    s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\functional.py", line 378, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 100.81 GiB (GPU 0; 15.99 GiB total capacity; 3.28 GiB already allocated; 10.13 GiB free; 3.49 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 100 images in a total of 100 batches.
  0%|                                                                                           | 0/60 [00:03<?, ?it/s]
Error completing request
Arguments: ('task(m2jbdkx91dn2fi7)', 'a __trev/color/type__ __jumbo/medium/photography/filmtypes__ film of __trev/locations/destinations__ by __artists/Photography/by_country/canada__', 'text, watermark, soft, out of focus, blurry', [], 60, 4, False, False, 100, 1, 5.5, -1.0, -1.0, 0, 0, 0, False, 2040, 1632, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002A852757730>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\processing.py", line 728, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\processing.py", line 976, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\k-diffusion\k_diffusion\sampling.py", line 198, in sample_dpm_2
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_samplers_kdiffusion.py", line 137, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
    return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
    return self.inner_model.apply_model(x, t, cond)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\modules\sd_hijack_optimizations.py", line 247, in split_cross_attention_forward
    s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
  File "D:\Stable Diffusion\stable-diffusion-webui-1.3.1\venv\lib\site-packages\torch\functional.py", line 378, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.41 GiB (GPU 0; 15.99 GiB total capacity; 3.03 GiB already allocated; 10.38 GiB free; 3.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 100 images in a total of 100 batches.
  0%|                                                                                           | 0/60 [00:01<?, ?it/s]
Error completing request
(Identical traceback and 50.41 GiB out-of-memory error repeated on a third attempt.)
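As an aside, the allocator message at the end of each traceback names one mitigation to try. A minimal sketch for webui-user.bat on Windows; the 512 value is an illustrative assumption, not a recommended setting:

```shell
rem In webui-user.bat, before the line that launches the UI.
rem max_split_size_mb is the knob named in the OOM message; 512 is illustrative.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```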

Additional information

No response

akx commented 1 year ago

Probably dupe of #10887, #10403.

velourlawsuits commented 1 year ago

> Probably dupe of #10887, #10403.

Thank you, choosing resolutions that are multiples of 8 works for me. That said, I feel like my rendering speed has been cut in half since I discovered this bug. I'm keeping this open in case anyone has a more concrete fix. Last week I was able to generate at any resolution, and to my knowledge the only things I've changed since then are adding various extensions and editing requirements_versions.txt. I've done a hard reset and I still get the error 🤷‍♂️

velourlawsuits commented 1 year ago

Correction - multiples of 8 don't work in every case.
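The safe-resolution rule the linked duplicates point at can be sketched as a small helper. The default of 64 is an assumption based on those reports, and the comments above suggest the exact required multiple varies, so treat it as a starting point rather than a guarantee:

```python
def snap_dimension(value, multiple=64):
    """Round a width/height to the nearest multiple.

    Hypothetical helper: 64 is an assumed safe multiple drawn from the
    linked issues (#10887, #10403), not a confirmed rule.
    """
    return max(multiple, ((value + multiple // 2) // multiple) * multiple)

# A target 8:10 print size of 1632x2040 snaps to 1664x2048.
print(snap_dimension(1632), snap_dimension(2040))
```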