AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: VRAM OUT OF MEMORY started with update #10005

Open · CrisisBomberman opened this issue 1 year ago

CrisisBomberman commented 1 year ago

Is there an existing issue for this?

What happened?

Before the update I had no problem generating 1720x1540 images with hires fix, but now I can't even take 512x512 to 2x hires (1024x1024); it just runs out of VRAM and stops:

OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 11.00 GiB total capacity; 10.15 GiB already allocated; 0 bytes free; 10.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have 11 GB of VRAM and am still hitting this. [screenshots of the error attached]
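For anyone following the max_split_size_mb hint in that message: it configures PyTorch's caching allocator and only takes effect if set before the first CUDA allocation. A minimal sketch of the mechanism (for the webui the variable is normally set in webui-user.bat rather than in Python, and the 128 MiB value below is illustrative, not a tuned recommendation):

```python
import os

# Must be set before torch makes its first CUDA allocation, or it is ignored.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value

import torch

x = torch.randn(4096, 4096, device="cuda")  # force an allocation
# "allocated" vs "reserved" here is what the OOM message is referring to.
print(torch.cuda.memory_summary(abbreviated=True))
```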

Steps to reproduce the problem

  1. Go to .... and generate an old prompt
  2. Press ....
  3. ...

What should have happened?

It should generate what I was able to generate before with hires fix. Right now I can't do high-resolution txt2img at all.

Commit where the problem happens

https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/72cd27a13587c9579942577e9e3880778be195f6

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--autolaunch --ckpt-dir D:\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion --vae-path D:\StableDiffusion\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt --no-half --always-batch-cond-uncond --deepdanbooru --theme dark --autolaunch --opt-split-attention

List of extensions

[screenshot of installed extensions]

Console logs

locon load lora method0:00, ?it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:20<00:00,  1.01s/it]
Error completing request█████████████████████████████████████████                      | 20/30 [01:16<00:07,  1.30it/s]
Arguments: ('task(totiuc6onpefe4l)', '1girl,(solo,focus:1.1),\nTatsumaki,\ngreen/short hair+bangs,\ngreen eyes,\nmakeup,natural lipstick,\n(huge breasts:1.4),puffy nipples, narrow waist,\n(wide hips:1.2),sexy,(fit:0.9),\n(thick thighs:1.2),wet,\n(wet skin:1.2), water particules,\n(shiny skin:1.1), curvy,\nblack bikini, highleg panties, choker,\nthighhighs,perfect female anatomy,\nview from below, + dutch angle shot,\nprominent female lines,dynamic pose ,leg lift,bare legs,legs spread,lift,vaginal<lora:opm_tatsumaki-20:0.75> ,<lora:ShinyOiledSkin_v20-LyCORIS:0.6>,tsundere,voluptuous,1boy,sex,huge penis,\nBREAK\nshipyard,pirateships,ships,sunset,before night,blue pink sky,(blurry background:0.7),\nmasterpiece,highquality,intricate details,\ninsane face details,vivid colors,vibrant,semi realistic lighting,8K UHD,bloom,\nreflections,depth of field corneo_anal', 'Negative prompt: bad-artist bad-hands-5 bad-image-v2-39000 bad-picture-chill-75v badhandv4 bad_prompt_version2 By bad artist -neg ng_deepnegative_v1_75t easynegative verybadimagenegative_v1.3,(worst quality,low quality,low res:1.3),legwear', [], 20, 16, False, False, 1, 1, 7, 3545968875.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.4, 3, '4x_NMKD-Siax_200k', 10, 0, 0, [], 0, False, '', 0, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x000001ABFEE6D8D0>, <controlnet.py.UiControlNetUnit object at 0x000001ABFF52D210>, False, '', 0.5, True, False, '', 'Lerp', False, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', False, False, 'positive', 'comma', 0, False, False, '', '', 0.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', 'None', 30, 4, 0, 0, False, 'None', '<br>', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, 7, '', '', None, False, None, False, 50) {}
Traceback (most recent call last):
  File "D:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "D:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 669, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 942, in sample
    samples = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(decoded_samples))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 536, in forward
    h = self.mid.attn_1(h)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 414, in cross_attention_attnblock_forward
    h_ = torch.zeros_like(k, device=q.device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 11.00 GiB total capacity; 10.08 GiB already allocated; 0 bytes free; 10.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
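Note what this first traceback shows: the failure is in encode_first_stage, i.e. while hires fix pushes the upscaled image back through the VAE, whose mid-block self-attention materializes a score matrix that grows with the square of the latent token count. A rough estimator of just that one tensor (a sketch assuming fp32 scores, which --no-half forces; it ignores everything else the model holds):

```python
def vae_attn_scores_gib(width, height, bytes_per=4):
    # The VAE downsamples 8x, so its attention sees (w/8)*(h/8) tokens;
    # the q @ k^T score matrix is tokens x tokens.
    tokens = (width // 8) * (height // 8)
    return tokens * tokens * bytes_per / 2**30

print(vae_attn_scores_gib(1024, 1024))  # ~1.0 GiB for one score matrix
print(vae_attn_scores_gib(1720, 1540))  # ~6.4 GiB, why large encodes hurt
```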

100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:14<00:00,  1.35it/s]
  0%|                                                                                           | 0/10 [00:01<?, ?it/s]
Error completing request
Arguments: ('task(khjymsab6qs9yw3)', '1girl,(solo,focus:1.1),\nTatsumaki,\ngreen/short hair+bangs,\ngreen eyes,\nmakeup,natural lipstick,\n(huge breasts:1.4),puffy nipples, narrow waist,\n(wide hips:1.2),sexy,(fit:0.9),\n(thick thighs:1.2),wet,\n(wet skin:1.2), water particules,\n(shiny skin:1.1), curvy,\nblack bikini, highleg panties, choker,\nthighhighs,perfect female anatomy,\nview from below, + dutch angle shot,\nprominent female lines,dynamic pose ,leg lift,bare legs,legs spread,lift,vaginal<lora:opm_tatsumaki-20:0.75> ,<lora:ShinyOiledSkin_v20-LyCORIS:0.6>,tsundere,voluptuous,1boy,sex,huge penis,\nBREAK\nshipyard,pirateships,ships,sunset,before night,blue pink sky,(blurry background:0.7),\nmasterpiece,highquality,intricate details,\ninsane face details,vivid colors,vibrant,semi realistic lighting,8K UHD,bloom,\nreflections,depth of field corneo_anal', 'Negative prompt: bad-artist bad-hands-5 bad-image-v2-39000 bad-picture-chill-75v badhandv4 bad_prompt_version2 By bad artist -neg ng_deepnegative_v1_75t easynegative verybadimagenegative_v1.3,(worst quality,low quality,low res:1.3),legwear', [], 20, 16, False, False, 1, 1, 7, 3545968875.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.4, 2.7, '4x_NMKD-Siax_200k', 10, 0, 0, [], 0, False, '', 0, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x000001ABFF07FE50>, <controlnet.py.UiControlNetUnit object at 0x000001ABFE839AE0>, False, '', 0.5, True, False, '', 'Lerp', False, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', False, False, 'positive', 'comma', 0, False, False, '', '', 0.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', 'None', 30, 4, 0, 0, False, 'None', '<br>', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, 7, '', '', None, False, None, False, 50) {}
Traceback (most recent call last):
  File "D:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "D:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 669, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 961, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 350, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 350, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 154, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "d:\stablediffusion\stable-diffusion-webui\venv\scripts\tomesd\tomesd\patch.py", line 64, in _forward
    x = u_a(self.attn1(m_a(self.norm1(x)), context=context if self.disable_self_attn else None)) + x
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 127, in split_cross_attention_forward
    s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\functional.py", line 378, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.78 GiB (GPU 0; 11.00 GiB total capacity; 4.28 GiB already allocated; 4.38 GiB free; 4.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
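The second and third failures land in split_cross_attention_forward, which is the --opt-split-attention path: it slices the query so the full score matrix is never materialized at once, but at hires sizes even one slice's scores (the 12.78 GiB einsum above) can exceed free VRAM. A minimal sketch of the chunking idea, not the webui's actual implementation, with an illustrative chunk size:

```python
import torch

def split_attention(q, k, v, chunk=1024):
    # Slicing q bounds the live score matrix at chunk*j elements instead
    # of i*j; the result matches ordinary softmax attention.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for start in range(0, q.shape[1], chunk):
        end = min(start + chunk, q.shape[1])
        s = torch.einsum('b i d, b j d -> b i j', q[:, start:end] * scale, k)
        out[:, start:end] = torch.einsum('b i j, b j d -> b i d', s.softmax(dim=-1), v)
    return out
```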

100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:14<00:00,  1.36it/s]
  0%|                                                                                           | 0/10 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(i95vaenrwo4ovpt)', '1girl,(solo,focus:1.1),\nTatsumaki,\ngreen/short hair+bangs,\ngreen eyes,\nmakeup,natural lipstick,\n(huge breasts:1.4),puffy nipples, narrow waist,\n(wide hips:1.2),sexy,(fit:0.9),\n(thick thighs:1.2),wet,\n(wet skin:1.2), water particules,\n(shiny skin:1.1), curvy,\nblack bikini, highleg panties, choker,\nthighhighs,perfect female anatomy,\nview from below, + dutch angle shot,\nprominent female lines,dynamic pose ,leg lift,bare legs,legs spread,lift,vaginal<lora:opm_tatsumaki-20:0.75> ,<lora:ShinyOiledSkin_v20-LyCORIS:0.6>,tsundere,voluptuous,1boy,sex,huge penis,\nBREAK\nshipyard,pirateships,ships,sunset,before night,blue pink sky,(blurry background:0.7),\nmasterpiece,highquality,intricate details,\ninsane face details,vivid colors,vibrant,semi realistic lighting,8K UHD,bloom,\nreflections,depth of field corneo_anal', 'Negative prompt: bad-artist bad-hands-5 bad-image-v2-39000 bad-picture-chill-75v badhandv4 bad_prompt_version2 By bad artist -neg ng_deepnegative_v1_75t easynegative verybadimagenegative_v1.3,(worst quality,low quality,low res:1.3),legwear', [], 20, 16, False, False, 1, 1, 7, 3545968875.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.4, 2, '4x_NMKD-Siax_200k', 10, 0, 0, [], 0, False, '', 0, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x000001AC15387C10>, <controlnet.py.UiControlNetUnit object at 0x000001ABFF35A8F0>, False, '', 0.5, True, False, '', 'Lerp', False, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', False, False, 'positive', 'comma', 0, False, False, '', '', 0.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', 'None', 30, 4, 0, 0, False, 'None', '<br>', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, 7, '', '', None, False, None, False, 50) {}
Traceback (most recent call last):
  File "D:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "D:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 669, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 961, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 350, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 350, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 154, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "d:\stablediffusion\stable-diffusion-webui\venv\scripts\tomesd\tomesd\patch.py", line 64, in _forward
    x = u_a(self.attn1(m_a(self.norm1(x)), context=context if self.disable_self_attn else None)) + x
  File "D:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 129, in split_cross_attention_forward
    s2 = s1.softmax(dim=-1, dtype=q.dtype)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.92 GiB (GPU 0; 11.00 GiB total capacity; 8.08 GiB already allocated; 1.07 GiB free; 8.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
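One detail worth reading off all three dumps: reserved is only slightly above allocated (8.30 vs 8.08 GiB here), so this looks like genuine exhaustion rather than fragmentation, and max_split_size_mb alone may not be enough. A small diagnostic sketch for telling the two apart (the 1 GiB threshold is an arbitrary heuristic, not a PyTorch rule):

```python
import torch

def looks_fragmented(threshold_gib=1.0):
    # reserved >> allocated suggests fragmentation, which
    # PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:... can mitigate;
    # otherwise the workload simply needs less memory or more VRAM.
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"allocated {allocated:.2f} GiB, reserved {reserved:.2f} GiB")
    return (reserved - allocated) > threshold_gib
```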

Additional information

No response

ponchojohn1234 commented 1 year ago

I had a similar issue, which I mentioned in #9983. Deleting /venv/ and letting it reinstall seems to have fixed it, so I'd suggest trying that too.

rostalsan commented 1 year ago

You're not even using --xformers; of course that leads to an OOM.

CrisisBomberman commented 1 year ago

> You're not even using --xformers; of course that leads to an OOM.

--opt-split-attention is the better option right now.

Sakura-Luna commented 1 year ago

Using --no-half will consume a lot of VRAM. What GPU are you using?
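For scale, a back-of-the-envelope sketch of what --no-half costs in weights alone (the ~860M parameter figure for the SD 1.x UNet is approximate, and this ignores the VAE, text encoder, and activations):

```python
# fp32 (--no-half) stores 4 bytes per parameter versus 2 for fp16,
# so weight memory roughly doubles.
unet_params = 860e6  # approximate SD 1.x UNet parameter count

for name, bytes_per_param in (("fp16", 2), ("fp32 (--no-half)", 4)):
    print(f"{name}: {unet_params * bytes_per_param / 2**30:.1f} GiB")
```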

pranshuthegamer commented 1 year ago

> I had a similar issue, which I mentioned in #9983. Deleting /venv/ and letting it reinstall seems to have fixed it, so I'd suggest trying that too.

venv no longer exists for me.

alenknight commented 1 year ago

I found that if I use --xformers I can go beyond the 2048x2048 mark, but then I have other issues, and it's non-deterministic. If I want pixel-perfect renders every time, I have to use the other optimizations, which limits me to 2k (on a 3090 card).
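A quick way to check the "pixel perfect" part is to render the same seed twice and diff the two outputs; a sketch with hypothetical file names:

```python
# A max per-pixel delta of 0 means the two renders were identical;
# nonzero deltas are the non-determinism described above.
import numpy as np
from PIL import Image

a = np.asarray(Image.open("run1.png"), dtype=np.int16)
b = np.asarray(Image.open("run2.png"), dtype=np.int16)
print("max per-pixel delta:", np.abs(a - b).max())
```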