AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError: The size of tensor a (166) must match the size of tensor b (77) at non-singleton dimension 1 #9557

Open · oliverban opened this issue 1 year ago

oliverban commented 1 year ago

Is there an existing issue for this?

What happened?

Tried to regenerate a previous pic that worked fine about a week ago, and noticed a bunch of these errors whenever I have embeddings in the negative prompt. Looks like there is an overflow issue or something in the latest update; A1111 worked fine before Easter, about a week ago.

Steps to reproduce the problem

  1. Update to the latest commit
  2. Bring in an earlier render that worked fine and read the generation parameters from it. Make sure to use LoRAs and embeddings in both the positive and negative prompts
  3. Hit Generate and get the error.

What should have happened?

Generation should produce an image identical to the previous render.

Commit where the problem happens

22bcc7be428c94e9408f589966c2040187245d81

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Mozilla Firefox, Brave

Command Line Arguments

--opt-sdp-no-mem-attention --no-half-vae --deepdanbooru

List of extensions

sd-webui-additional-networks (but I have tested without it as well)

Console logs

Traceback (most recent call last):
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\processing.py", line 642, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\processing.py", line 587, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "C:\Users\olive\Documents\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\sd_hijack_clip.py", line 210, in forward
    return modules.sd_hijack_clip_old.forward_old(self, texts)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\sd_hijack_clip_old.py", line 81, in forward_old
    return self.process_tokens(remade_batch_tokens, batch_multipliers)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "C:\Users\olive\Documents\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 708, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\olive\Documents\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 226, in forward
    embeddings = inputs_embeds + position_embeddings
RuntimeError: The size of tensor a (166) must match the size of tensor b (77) at non-singleton dimension 1

Additional information

No response
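
For context (an editor's note, not part of the original report): the two sizes in the error line up with CLIP's text encoder, whose learned position-embedding table has exactly 77 slots, so a 166-token prompt cannot be added to it elementwise. The traceback also runs through modules/sd_hijack_clip_old.py (`forward_old`), the legacy prompt-processing path toggled from the Compatibility settings, which in this log hands the full 166-token sequence to the transformer rather than 77-token chunks; that matches the reporter's resolution below. A minimal PyTorch sketch reproducing the failure (the hidden size of 768 is an assumption; the other shapes come from the error message):

```python
import torch

# Shapes from the error message: 166 prompt tokens vs CLIP's 77-slot
# position-embedding table. Hidden size 768 is an assumption.
inputs_embeds = torch.randn(1, 166, 768)       # per-token embeddings
position_embeddings = torch.randn(1, 77, 768)  # learned position table

# Broadcasting cannot reconcile 166 with 77 at dimension 1, so this raises:
# RuntimeError: The size of tensor a (166) must match the size of
# tensor b (77) at non-singleton dimension 1
embeddings = inputs_embeds + position_embeddings
```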

oliverban commented 1 year ago

EDIT: USER ERROR!

I had accidentally changed some settings under the Compatibility tab. Reverting the changes has fixed the error! This can be closed!

mattehicks commented 1 year ago

I'm getting the same error; it's resolved when I remove the offending LoRA. I'd like to be able to help fix this code. Can someone point me in the right direction?

Context: I'm running an XYZ plot with a list of checkpoints, and it breaks on non-compatible models. Could we code this so that it omits the LoRA from the prompt when it isn't compatible?
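
One possible shape for that, offered as a hypothetical sketch rather than existing webui code: prompts reference LoRAs with the `<lora:name:weight>` syntax, so a preprocessing step could drop any tag whose name isn't in a caller-supplied set of compatible names (the function name and the `compatible` parameter are inventions for illustration):

```python
import re

# Matches <lora:name> or <lora:name:weight>; group 1 captures the name.
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::[^>]*)?>")

def strip_incompatible_loras(prompt: str, compatible: set[str]) -> str:
    """Remove any <lora:...> tag whose name is not in `compatible`."""
    def keep_or_drop(match: re.Match) -> str:
        return match.group(0) if match.group(1) in compatible else ""
    return LORA_TAG.sub(keep_or_drop, prompt)

print(strip_incompatible_loras(
    "masterpiece, <lora:styleA:0.8>, portrait <lora:styleB:1>",
    compatible={"styleA"},
))
# -> "masterpiece, <lora:styleA:0.8>, portrait "  (styleB tag removed)
```

Deciding which LoRAs count as compatible with a given checkpoint (e.g. SD 1.x vs SDXL base models) would still have to happen elsewhere; this only handles removing them from the prompt.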