AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

Exception and OOM #1971

Closed · NO-ob closed this 1 year ago

NO-ob commented 1 year ago

Describe the bug
Exception and OOM when trying to generate images. Not sure which commit caused it, but I restarted the webui earlier and it hasn't been working since. I'm using the same settings I was using yesterday and was not OOMing then. I'm using an AMD card.

To Reproduce
Steps to reproduce the behavior:
- Sampling method: DPM2, Euler A
- Sampling steps: 35
- Width: 896
- Height: 1344
- Highres fix: 0.64
- CFG: 14

Expected behavior
Images to be generated.

Desktop (please complete the following information):

Error completing request
Arguments: ('horns, solo,yukata', '(pubic_hair) (ugly:1.2), extra fingers, (mutated hands and fingers:1.3), (mutation:1.3), (poorly drawn hands), (poorly drawn face:1.2), (deformed face:1.3), (bad anatomy), (bad proportions:1.2), (censored:1.4), (out_of_frame:1.2), (multiple_views), (zipper), (censorship:1.2),  (mosaic:1/2), (signature), (copyright), (trademark), (watermark:1.3), jewelry, braids, pigtail,  (brooch), (SLEEVES:1.4), (ring),(big breasts), (adult woman), man, men, masculine, (tall:1.3)', 'None', 'None', 35, 4, False, False, 1, 1, 14, 3474768211.0, -1.0, 0, 0, 0, False, 1344, 896, True, False, 0.64, 0, False, False, None, '', 1, '', 4, '', True, False) {}
Traceback (most recent call last):
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/ui.py", line 158, in f
    res = list(func(*args, **kwargs))
  File "/mnt/LoonixGames/stable-diffusion-webui/webui.py", line 66, in f
    res = func(*args, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/txt2img.py", line 43, in txt2img
    processed = process_images(p)
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/processing.py", line 381, in process_images
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/processing.py", line 555, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.steps)
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/sd_samplers.py", line 377, in sample_img2img
    return self.func(self.model_wrap_cfg, xi, sigma_sched, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 132, in sample_dpm_2
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/sd_samplers.py", line 252, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=tensor[a:b])
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/mnt/LoonixGames/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "/mnt/LoonixGames/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/LoonixGames/stable-diffusion-webui/modules/hypernetwork.py", line 75, in attention_CrossAttention_forward
    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
RuntimeError: HIP out of memory. Tried to allocate 10.55 GiB (GPU 0; 15.98 GiB total capacity; 13.94 GiB already allocated; 1.92 GiB free; 14.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
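
For context, the 10.55 GiB figure is consistent with the full self-attention similarity matrix at this resolution. A minimal sketch of the arithmetic, assuming an SD1.x U-Net (8 attention heads), a single image, and an fp32 sim tensor; these assumptions are not stated in the log:

```python
# Back-of-the-envelope check of "Tried to allocate 10.55 GiB".
# The failing einsum materializes sim with shape (batch*heads, N, N),
# where N is the number of latent positions at 896x1344.
# Assumed (not taken from the log): 8 heads, one image, fp32.
n = (896 // 8) * (1344 // 8)   # 112 * 168 = 18816 latent tokens
sim_elems = 8 * n * n          # batch*heads = 8
print(sim_elems * 4 / 2**30)   # 4 bytes/elem -> ~10.55 GiB, matching the error
```

The error message itself suggests max_split_size_mb to avoid fragmentation, and on ROCm builds the allocator reads PYTORCH_HIP_ALLOC_CONF, as the message notes. A hedged example (the value 128 is arbitrary):

```python
import os
# Must be set before torch initializes the caching allocator,
# so set it before importing torch (or export it in the shell).
os.environ["PYTORCH_HIP_ALLOC_CONF"] = "max_split_size_mb:128"
import torch  # noqa: E402
```

Note that fragmentation tuning only helps when reserved memory far exceeds allocated memory; here 13.94 GiB is genuinely allocated against 15.98 GiB of total capacity, so a single 10.55 GiB request cannot fit regardless.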
Kruk2 commented 1 year ago

Same here. OOM at batch size 6 on a 3090 with 24 GB of VRAM.
- Worked fine on c9cc65b201679ea43c763b0d85e749d40bbc5433
- Throws OOM on 27032c47df9c07ac21dd5b89fa7dc247bb8705b6
- Works on the latest commit cfc33f99d47d1f45af15499e5965834089d11858
Installing xformers and adding --xformers helps (and it's faster). Not sure whether xformers affects the quality of generated images.
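
For reference, the class of workaround mentioned here (xformers-style memory-efficient attention, or the webui's --opt-split-attention flag) bounds exactly the tensor that blew up in the traceback above by never materializing the full (batch*heads, N, N) sim matrix. A minimal sketch of the idea in plain PyTorch; this is illustrative, not the webui's or xformers' actual implementation:

```python
import torch

def chunked_attention(q, k, v, scale, chunk=1024):
    # q, k, v: (B, N, d), where B folds batch*heads. Instead of one
    # (B, N, N) similarity tensor (the 10.55 GiB allocation above),
    # process `chunk` query rows at a time; the peak temporary is
    # then only (B, chunk, N), trading some speed for memory.
    out = torch.empty_like(q)
    for i in range(0, q.shape[1], chunk):
        sim = torch.einsum('b i d, b j d -> b i j', q[:, i:i+chunk], k) * scale
        out[:, i:i+chunk] = torch.einsum('b i j, b j d -> b i d',
                                         sim.softmax(dim=-1), v)
    return out
```

The result is numerically the same as full attention; only peak memory changes, which is why split or flash-style attention can rescue resolutions and batch sizes that otherwise OOM.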