AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Error with `Lora/Networks: use old method` enabled in Ver. 1.5 or later #12157

Closed · hako-mikan closed this issue 1 year ago

hako-mikan commented 1 year ago

Is there an existing issue for this?

What happened?

An error occurs when generating with a LoRA while the setting `Lora/Networks: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension` is enabled.

Steps to reproduce the problem

  1. Add a LoRA to the prompt, e.g. `<lora:add_detail:1>`
  2. In Settings, enable `Lora/Networks: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension`
  3. Generate an image

What should have happened?

Generation should complete without error. When using an extension that needs to change the strength of a LoRA at each sampling step, this option must be enabled; otherwise the LoRA and the model are reloaded at every step, which makes generation significantly slower.
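
For illustration only: the default method merges the LoRA delta into the model weights once per generation, while the old method applies the LoRA at forward time, so a multiplier that is read on every pass can change between steps without any reload. A minimal sketch of the forward-time idea, with simplified names (`LoraLinear`, `multiplier`, `scale` are illustrative, not webui's actual API):

    import torch
    import torch.nn as nn

    class LoraLinear(nn.Module):
        """Forward-time LoRA: base(x) + up(down(x)) * multiplier * scale.
        Illustrative sketch, not webui's actual class."""
        def __init__(self, base: nn.Linear, rank: int = 32, alpha: float = 32.0):
            super().__init__()
            self.base = base
            self.down = nn.Linear(base.in_features, rank, bias=False)
            self.up = nn.Linear(rank, base.out_features, bias=False)
            self.scale = alpha / rank
            self.multiplier = 1.0  # read on every forward pass

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Because the multiplier is applied here, an extension can change
            # it between sampling steps without reloading any weights.
            return self.base(x) + self.up(self.down(x)) * self.multiplier * self.scale

    # Per-step strength change, no model reload:
    layer = LoraLinear(nn.Linear(768, 3072))  # same shape as CLIP's mlp.fc1
    for strength in (1.0, 0.8, 0.5):
        layer.multiplier = strength
        _ = layer(torch.randn(77, 768))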

Version or Commit where the problem happens

version: v1.5.1  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: N/A  •  gradio: 3.32.0  •  checkpoint: cc6cb27103

What Python version are you running on?

Python 3.10.x

What platforms do you use to access the UI?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 series and above)

Cross attention optimization

sdp

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

--opt-sdp-attention

List of extensions

None

Console logs

*** Error completing request
*** Arguments: ('task(s0z8x6y6lqrpu56)', 'a girl <lora:add_detail:1>', '(worst quality,low quality:1.4)bad anatomy', [], 20, 16, False, False, 1, 1, 7.5, 2124684864.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001BDFD059EA0>, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
    Traceback (most recent call last):
      File "G:\AI\stable-diffusion-webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "G:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
        processed = processing.process_images(p)
      File "G:\AI\stable-diffusion-webui\modules\processing.py", line 677, in process_images
        res = process_images_inner(p)
      File "G:\AI\stable-diffusion-webui\modules\processing.py", line 783, in process_images_inner
        p.setup_conds()
      File "G:\AI\stable-diffusion-webui\modules\processing.py", line 1191, in setup_conds
        super().setup_conds()
      File "G:\AI\stable-diffusion-webui\modules\processing.py", line 364, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
      File "G:\AI\stable-diffusion-webui\modules\processing.py", line 353, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "G:\AI\stable-diffusion-webui\modules\prompt_parser.py", line 163, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "G:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "G:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 271, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "G:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 324, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
        return self.text_model(
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
        encoder_outputs = self.encoder(
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
        layer_outputs = encoder_layer(
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 389, in forward
        hidden_states = self.mlp(hidden_states)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 344, in forward
        hidden_states = self.fc1(hidden_states)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 357, in network_Linear_forward
        return network_forward(self, input, torch.nn.Linear_forward_before_network)
      File "G:\AI\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 345, in network_forward
        y = module.forward(y, input)
      File "G:\AI\stable-diffusion-webui\extensions-builtin\Lora\network_lora.py", line 84, in forward
        return y + self.up_model(self.down_model(x)) * self.multiplier() * self.calc_scale()
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "G:\AI\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 357, in network_Linear_forward
        return network_forward(self, input, torch.nn.Linear_forward_before_network)
      File "G:\AI\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 337, in network_forward
        y = original_forward(module, input)
      File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x32)
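
For reference, the final error is a plain shape mismatch: `network_Linear_forward` appears twice in the trace, and the LoRA down projection (weight 32x768, so it expects 768-dimensional input) seems to receive the 3072-dimensional output of CLIP's `fc1` rather than `fc1`'s input. A short sketch, using the shapes from the log above, reproduces the same error message:

    import torch

    hidden = torch.randn(77, 3072)                    # output of fc1 (768 -> 3072)
    lora_down = torch.nn.Linear(768, 32, bias=False)  # expects the 768-dim input
    lora_down(hidden)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x32)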

---

Additional information

No response

catboxanon commented 1 year ago

Closing as duplicate of #12104, but I'll note your comment there.

> When using an extension that needs to change the strength of a LoRA at each sampling step, this option must be enabled; otherwise the LoRA and the model are reloaded at every step, which makes generation significantly slower.