KohakuBlueleaf / a1111-sd-webui-locon

An extension for loading LyCORIS models in sd-webui
Apache License 2.0
501 stars 110 forks

`LoraUpDownModule` has no attribute `up_model` #23

Open cppietime opened 1 year ago

cppietime commented 1 year ago

While attempting to use the extension with a Locon model as a Lora, trying to generate a prompt produces the following stacktrace:

Traceback (most recent call last):
  File "F:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 642, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 587, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 305, in lora_Linear_forward
    lora_apply_weights(self)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 273, in lora_apply_weights
    self.weight += lora_calc_updown(lora, module, self.weight)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 612, in lora_calc_updown
    updown = rebuild_weight(module, target)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 536, in rebuild_weight
    up = module.up_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
AttributeError: 'LoraUpDownModule' object has no attribute 'up_model'

Checkpoint: anythingV3_fp16.ckpt [812cd9f9d9]
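The failing line in the traceback is `up = module.up_model.weight...` inside `rebuild_weight`: the module object reaching that code path is a `LoraUpDownModule` that stores its weights under different attribute names than the code expects. A minimal sketch of a defensive fix, assuming the weights may live under either `up_model` or `up` depending on which loader created the module (the attribute names here are illustrative, not taken from the extension's source):

```python
def get_weight(module, *names):
    """Return the weight of the first sub-module found under any of the
    given attribute names, or None if no candidate matches.

    Hypothetical helper: lets rebuild_weight tolerate modules that expose
    their up/down matrices as either `up_model`/`down_model` or `up`/`down`.
    """
    for name in names:
        sub = getattr(module, name, None)  # None instead of AttributeError
        if sub is not None and hasattr(sub, "weight"):
            return sub.weight
    return None
```

With such a helper, the call site would read `up = get_weight(module, "up_model", "up")` and could raise a clear, module-specific error when the result is None instead of crashing deep inside the forward pass.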

KohakuBlueleaf commented 1 year ago

Do you have any other extensions for LoRA? A lot of them are not compatible with my extension.

cppietime commented 1 year ago

My extensions are the built-in ones, plus the following: (extension list screenshots not captured in the text)

KohakuBlueleaf commented 1 year ago

What's your LoRA model? I may need the file that causes the error to find the source of the bug.

cppietime commented 1 year ago

I tried both versions of this model (NSFW): https://civitai.com/models/19987/beegirlz. Both result in the error. After attempting to include this LoRA in the prompt, all subsequent prompts produce the same error until the webui is restarted, even after removing the LoRA from the prompt.

Gourieff commented 1 year ago

Same here: I get the same error when trying to use this LoRA: https://civitai.com/models/27615/delicate-armor?modelVersionId=33064

KohakuBlueleaf commented 1 year ago

@Gourieff I suggest you use the new extension https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris