lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError without more error messages #228

Open pravezhang opened 1 year ago

pravezhang commented 1 year ago

Is there an existing issue for this?

What happened?

After starting the UI through the webui-user.bat script, I open the web page, enter the prompt, and click Generate; a RuntimeError occurs within a few seconds. On the second attempt the RuntimeError occurs faster than on the first and is raised at a different statement. Detailed tracebacks are pasted below.

Steps to reproduce the problem

  1. Start the service
  2. Enter a prompt and click Generate
  3. The error occurs

What should have happened?

An image should have been generated. I'm not sure of the cause: are AMD mobile processors unsupported? GPU VRAM too low? System RAM too low? Mismatched dependencies?

Version or Commit where the problem happens

version: 1.5.1

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD iGPUs

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

None

List of extensions

None

Console logs

venv "D:\Codes\Py\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: b180d1df30125ed606f94a779536f2dfb8aca74a
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [ad2a33c361] from D:\Codes\Py\stable-diffusion-webui-directml\models\Stable-diffusion\v2-1_768-ema-pruned.ckpt
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 10.2s (launcher: 0.6s, import torch: 3.8s, import gradio: 1.1s, setup paths: 0.8s, other imports: 2.0s, load scripts: 1.1s, create ui: 0.5s, gradio launch: 0.1s).
Creating model from config: D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Applying attention optimization: InvokeAI... done.
Model loaded in 18.2s (load weights from disk: 5.0s, find config: 2.0s, create model: 0.7s, apply weights to model: 0.8s, apply half(): 1.1s, move model to device: 8.2s, calculate empty prompt: 0.3s).
  5%|████▏                                                                              | 1/20 [00:09<03:03,  9.66s/it]
*** Error completing request                                                                    | 0/20 [00:00<?, ?it/s]
*** Arguments: ('task(a4igjg8ut0s9ynz)', 'flowers with smily faces', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001F031023640>, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
    Traceback (most recent call last):
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\txt2img.py", line 69, in txt2img
        processed = processing.process_images(p)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 680, in process_images
        res = process_images_inner(p)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 797, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 1057, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 464, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
        return func()
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 464, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 183, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
        return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
        return self.inner_model.apply_model(x, t, cond)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 327, in forward
        x = self.norm(x)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
        return F.group_norm(
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError

---
*** Error completing request
*** Arguments: ('task(caia0fmfx4f71b3)', 'flowers with smily faces', '', [], 20, 0, False, False, 1, 1, 7, 55.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001F054792830>, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
    Traceback (most recent call last):
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\txt2img.py", line 69, in txt2img
        processed = processing.process_images(p)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 680, in process_images
        res = process_images_inner(p)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 786, in process_images_inner
        p.setup_conds()
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 1194, in setup_conds
        super().setup_conds()
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 364, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\processing.py", line 353, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\prompt_parser.py", line 163, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\Codes\Py\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "D:\Codes\Py\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "D:\Codes\Py\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 263, in process_tokens
        tokens = torch.asarray(remade_batch_tokens).to(devices.device)
    RuntimeError

---

Additional information

CPU: AMD 6800H, RAM: 16 GB, GPU: iGPU (integrated graphics), OS: Windows 11 Home, Python version: 3.10.6
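
A quick sanity check that DirectML actually sees the iGPU (a minimal sketch, not part of the webui; it assumes the torch-directml package installed in the webui venv, whose device_count()/device_name() helpers should be verified against the installed version):

# Minimal DirectML sanity check (sketch, not part of the webui).
# Assumes the torch-directml package from the webui venv; run it with
# venv\Scripts\python.exe so the same torch build is used.
import torch
import torch_directml

print(torch_directml.device_count(), "DirectML adapter(s) found")
for i in range(torch_directml.device_count()):
    print(f"  [{i}] {torch_directml.device_name(i)}")

dml = torch_directml.device()            # default adapter (the iGPU here)
x = torch.randn(1024, 1024, device=dml)  # small fp32 tensor (~4 MB)
y = x @ x                                # tiny matmul to exercise the backend
print("ok:", tuple(y.shape), y.dtype)

If the adapter is listed and the matmul succeeds, the device itself is usable and the failure is more likely a resource limit than a missing backend.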

lshqqytiger commented 7 months ago

I think this is an out-of-memory error.
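
If so, the usual workaround on shared-memory iGPUs is to launch with the upstream webui low-memory flags. A sketch of webui-user.bat (the --medvram/--lowvram and --opt-sub-quad-attention options come from the upstream webui; try --medvram first, fall back to --lowvram if it still fails):

@echo off
rem webui-user.bat - low-memory launch sketch for a shared-memory iGPU
set PYTHON=
set GIT=
set VENV_DIR=
rem --lowvram keeps most of the model off the device to reduce VRAM use;
rem --opt-sub-quad-attention selects a lower-memory attention implementation
set COMMANDLINE_ARGS=--lowvram --opt-sub-quad-attention
call webui.bat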