hnmr293 / sd-webui-cutoff

Cutoff - Cutting Off Prompt Effect

AssertionError #17

Open Enferlain opened 1 year ago

Enferlain commented 1 year ago

Wanted to try it today for something; it returns this error when generating. It worked fine about a week ago, I think.

Traceback (most recent call last):
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/processing.py", line 626, in process_images_inner
    c = get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, p.steps, cached_c)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/prompt_parser.py", line 205, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1215, in _call_impl
    hook_result = hook(self, input, result)
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-cutoff/scripts/cutoff.py", line 142, in hook
    assert tensor.shape == t.shape
AssertionError
hnmr293 commented 1 year ago

Thank you for your feedback.

  1. Let me know your generation information, especially the prompt, the negative prompt, and the cutoff tokens.
  2. Let me know your WebUI version (A1111, vladmandic, or others) and its commit hash.
cong-bao commented 11 months ago

@hnmr293 Sorry, I used machine translation, so this may not be very clear.

Hello, I have also run into this. I found that the error occurs when the length of my prompt or negative prompt exceeds 150.

When testing, I used the following negative prompt:

Negative prompt: ((((ugly)))),lowres, bad anatomy, [:((No more than one thumb, index finger, middle finger, ring finger and little finger on one hand),(mutated hands and fingers:1.5 ), fused ears, one hand with more than 5 fingers, one hand with less than 5 fingers,):0.5] bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, Missing limbs, three arms, bad feet, text font ui, signature, blurry, malformed hands, long neck, mutated hands and fingers :1.5).(long body :1.3),(mutation ,poorly drawn :1.2), disfigured, malformed, mutated, multiple breasts, yaoi, three legs,

Finally, these are the extensions and commits I am using:

  - https://github.com/7eu7d7/DreamArtist-sd-webui-extension/commit/12f8077517b11199802f8d448d36ea573debae96
  - https://github.com/KohakuBlueleaf/a1111-sd-webui-haku-img/commit/5069e2b2ba71e3ab2f8172bbb819996ef1041427
  - https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris/commit/4b005e3381ca9b41e1129fd213de96f9ea58fc87
  - https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/commit/1cb4fc8f2572545418fe988edd8815a20e6b0e28
  - https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/commit/70b3c5ea3c9f684d04e7ff59167565974415735c
  - https://github.com/nonnonstop/sd-webui-3d-open-pose-editor/commit/f2d5aac51d891bc5f266b1549f3cf4495fc52160
  - https://github.com/Mikubill/sd-webui-controlnet/commit/d8551e447d8718e15b8ff5de04036d3fd1b3c5ce
  - https://github.com/hnmr293/sd-webui-cutoff/commit/3a073e9c4525c21b72ae4645125768457c9c98e1
  - https://github.com/zanllp/sd-webui-infinite-image-browsing/commit/55d154e3ee661633bc9c24056c7dffa16819ebfb
  - https://github.com/Physton/sd-webui-prompt-all-in-one/commit/937e45be0bfad20f79bf3b8e7e05f632da5f391d
  - https://github.com/d8ahazard/sd_dreambooth_extension/commit/b396af26b7906aa82a29d8847e756396cb2c28fb
  - https://github.com/toriato/stable-diffusion-webui-wd14-tagger/commit/3ba3a7356447e91c15ffb6d01ca61f878a2292a8
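For reference, the ">150" threshold lines up with how A1111 builds conditioning tensors: prompts are split into chunks of up to 75 tokens, and each chunk is wrapped with BOS/EOS, giving 77 rows per chunk. A minimal sketch of that arithmetic (`cond_shape` is a hypothetical helper, not WebUI code; only the 75-token chunking and 77-row padding are taken from A1111's behavior):

```python
import math

def cond_shape(n_tokens: int, chunk: int = 75, width: int = 768) -> tuple[int, int]:
    """Approximate the conditioning tensor shape A1111 produces.

    Prompts are split into chunks of up to 75 tokens; each chunk is
    padded with BOS/EOS to 77 rows of `width`-dim embeddings.
    """
    chunks = max(1, math.ceil(n_tokens / chunk))
    return (77 * chunks, width)

print(cond_shape(60))   # 1 chunk  -> (77, 768)
print(cond_shape(150))  # 2 chunks -> (154, 768)
print(cond_shape(151))  # 3 chunks -> (231, 768)
```

So crossing a multiple of 75 changes the tensor's row count, which matches the shapes reported in the traceback below.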

aigias commented 11 months ago

I once ran into the same issue. In my case it was caused by a particular part of the prompt; after I removed it, the problem was resolved. I suspect it wasn't the specific words themselves but something invisible that had been pasted into the prompt box. You can try removing tags one at a time and regenerating to see whether that fixes it. That's what I found, though it may not be the same cause.

greasyi commented 5 months ago

Yes, sometimes a single token is the difference between getting the error or not. I don't think I had any other plugins running. The plugin's options don't seem to make a difference: once a prompt is "bad", it reproduces 100% of the time as long as there is one target token matching one thing in the prompt.

If I add 5-10 commas to the prompt, the issue goes away.

AssertionError: tensor and t must have same shape
    tensor: torch.Size([154, 768])
    t:      torch.Size([77, 768])

The numbers 77 and 154 are suspicious to me, because multiples of 75 are about where prompts get split up. A mismatch between pre- and post-cutoff prompt lengths could be the issue.
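That theory would also explain the comma workaround: Cutoff builds a rewritten variant of the prompt (target tokens masked out), and if the original tokenizes into two 75-token chunks while the rewritten version fits into one, the hook ends up comparing a 154-row tensor against a 77-row one. Padding the prompt with filler commas pushes both variants over the same chunk boundary. A rough illustration in plain Python (a simplified model of the chunking arithmetic, not the extension's actual code; the 80/74/90/84 token counts are made up for the example):

```python
def cond_rows(n_tokens: int, chunk: int = 75, rows_per_chunk: int = 77) -> int:
    """Rows in the conditioning tensor: 77 per 75-token chunk (incl. BOS/EOS)."""
    chunks = max(1, -(-n_tokens // chunk))  # ceiling division
    return rows_per_chunk * chunks

# Original prompt tokenizes to 80 tokens; the masked rewrite happens to
# tokenize to 74 -> different chunk counts, so the shapes diverge.
print(cond_rows(80), cond_rows(74))  # 154 77

# Adding filler commas lifts the rewrite over the boundary too,
# so both variants produce two chunks and the assertion passes.
print(cond_rows(90), cond_rows(84))  # 154 154
```

If that is the mechanism, a robust fix in the extension would be to tokenize the original and rewritten prompts together (or pad the shorter conditioning to the longer one) before comparing shapes.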