lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI

[Bug]: Not enough memory - RuntimeError: Could not allocate tensor with 1610612736 bytes. #414

Open 108806 opened 8 months ago

108806 commented 8 months ago

What happened?

After trying any of the upscalers, it crashes with an error: RuntimeError: Could not allocate tensor with 1610612736 bytes. There is not enough GPU video memory available!

Which is pretty weird, because 1610612736 bytes is only about 1.5 GB, and I have 16 GB on my 7800 XT.
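
For reference, 1610612736 bytes is exactly 1.5 GiB, and the problem is not that 1.5 GiB exceeds 16 GiB: it is one more contiguous allocation on top of whatever DirectML already holds for model weights and earlier activations. A back-of-the-envelope sketch of where a tensor of exactly this size can come from during the 2x hires pass (the batch-times-heads and chunk-size figures below are assumptions, not taken from the log):

```python
n = 1_610_612_736
print(n / 2**30)   # 1.5 -> exactly 1.5 GiB

# Plausible provenance (an assumption, not confirmed by the log): one query
# chunk of fp16 attention scores in the hires pass. A 2x upscale of 768x1024
# is a 1536x2048 image, i.e. a 192x256 latent = 49152 key/value tokens.
batch_heads = 16                        # assumed: batch 2 (cond+uncond) x 8 heads
q_chunk = 1024                          # assumed sub-quad query chunk size
kv_tokens = (1536 // 8) * (2048 // 8)   # 49152
print(batch_heads * q_chunk * kv_tokens * 2)  # fp16 -> 1610612736 bytes
```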

Steps to reproduce the problem

Just enable hires fix and upscale 2x from 768x1024.

What should have happened?

Nothing bad; the upscale should have completed normally.

What browsers do you use to access the UI?

No response

Sysinfo

sysinfo-2024-03-11-18-00.json

Console logs

txt2img: mouses counqering the mars
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:41<00:00,  1.39s/it]
  0%|                                                                                           | 0/30 [00:04<?, ?it/s]
*** Error completing request
*** Arguments: ('task(p1a6jhxdf99d9ig)', <gradio.routes.Request object at 0x0000024642954A90>, 'mouses counqering the mars', '', [], 30, 'Euler a', 1, 1, 7, 1024, 768, True, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', ['Clip skip: 2'], 0, False, '', 0.8, 218732222, False, -1, 0, 0, 0, True, False, False, False, 'base', False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', True, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, False, 512, 64, True, True, True, False, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, '', 'gelbooru', True, 100, False, False, True, '', False, 0.75, False, "Don't Change", "Don't Change", True, '', False, 2, 'None', 'None', 0.5, 1, 100, 'Random', 'All', '', 1, -1, 1, False, '', False, False, False, False, '', '', False, False, 'Add Before', False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\processing.py", line 787, in process_images
        res = process_images_inner(p)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\processing.py", line 1015, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\processing.py", line 1367, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\processing.py", line 1452, in sample_hr_pass
        samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 193, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 193, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 426, in sub_quad_attention_forward
        x = sub_quad_attention(q, k, v, q_chunk_size=shared.cmd_opts.sub_quad_q_chunk_size, kv_chunk_size=shared.cmd_opts.sub_quad_kv_chunk_size, chunk_threshold=shared.cmd_opts.sub_quad_chunk_threshold, use_checkpoint=self.training)
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 466, in sub_quad_attention
        return sub_quadratic_attention.efficient_dot_product_attention(
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sub_quadratic_attention.py", line 207, in efficient_dot_product_attention
        attn_scores = compute_query_chunk_attn(
      File "F:\_PROJECTS\AI\STABLE_DIFF\stable-diffusion-webui-directml\modules\sub_quadratic_attention.py", line 130, in _get_attention_scores_no_kv_chunking
        attn_probs = attn_scores.softmax(dim=-1)
    RuntimeError: Could not allocate tensor with 3221225472 bytes. There is not enough GPU video memory available
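
Note that the pasted log actually fails on 3221225472 bytes (3 GiB), exactly double the figure in the issue title; that would be consistent with the softmax materializing fp32 probabilities from fp16-sized scores, though it may simply be a different run. The trace dies in modules/sub_quadratic_attention.py at attn_scores.softmax(dim=-1). The chunking scheme that file implements looks roughly like the following simplified sketch (names are mine; this shows the idea, not the webui's actual code):

```python
import torch

def chunked_attention(q, k, v, q_chunk_size=1024):
    """Sub-quadratic attention sketch: slice the query so the full
    [q_tokens, kv_tokens] score matrix never exists all at once."""
    scale = q.shape[-1] ** -0.5              # q, k, v: [batch*heads, tokens, dim]
    out = []
    for i in range(0, q.shape[1], q_chunk_size):
        q_chunk = q[:, i:i + q_chunk_size]
        scores = torch.bmm(q_chunk, k.transpose(1, 2)) * scale  # [b*h, chunk, kv]
        probs = scores.softmax(dim=-1)       # <- the op that fails in the log
        out.append(torch.bmm(probs, v))
    return torch.cat(out, dim=1)
```

Even chunked, each scores/probs slice is batch*heads x q_chunk x kv_tokens, so at ~49k key/value tokens a single slice is already the gigabyte-scale tensor from the error.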

Additional information

This happens with all models, all upscalers, and all prompts.

lshqqytiger commented 8 months ago

As the error message says, your VRAM is almost full, so there is no free memory left to allocate n more bytes. Consider reducing the resolution, or switching to ZLUDA, which has much better memory management.
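
Because the failing tensor scales linearly with the hires output's pixel count (and the total attention work quadratically), even a modest cut in the upscale target helps a lot. A sketch of the scaling, reusing the assumed figures from above (the helper is hypothetical; fp32 is assumed because the logged failure is exactly double the fp16 figure):

```python
def peak_softmax_bytes(w_px, h_px, q_chunk=1024, batch_heads=16, dtype_bytes=4):
    """Hypothetical helper: bytes for one chunk's attention-probability
    tensor at a given hires output size (latent = pixels / 8)."""
    kv_tokens = (w_px // 8) * (h_px // 8)
    return batch_heads * q_chunk * kv_tokens * dtype_bytes

print(peak_softmax_bytes(1536, 2048))  # 2x of 768x1024 -> 3221225472, the failing 3 GiB
print(peak_softmax_bytes(1152, 1536))  # 1.5x upscale   -> 1811939328, about 1.7 GiB
```

The chunk sizes visible in the traceback are also tunable at launch (`--sub-quad-q-chunk-size`, `--sub-quad-kv-chunk-size`, `--sub-quad-chunk-threshold`), and `--medvram` / `--lowvram` keep less resident in VRAM at some speed cost; whether any of that is enough here depends on what else is loaded.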