hako-mikan / sd-webui-regional-prompter

set prompt to divided region

LoRA doesn't work well (module 'lora' has no attribute 'lora_forward') #237

Closed caotranduy closed 11 months ago

caotranduy commented 1 year ago

I'm using SD on Google Colab in latent RP mode, with the settings shown below:

[screenshot: Regional Prompter settings]

But it always generates only one image before hitting this error, every time:

[screenshot: error message]

The error persists even when Attention mode is on or RP is disabled; only restarting the web UI fixes it temporarily.

Here is what the terminal shows:

Warning: Nonstandard height / width.
Warning: Nonstandard height / width for ulscaled size
[1] [[1.0, 1.0, 1.0]] 1,1,1 0.2 Horizontal
Regional Prompter Active, Pos tokens : [50, 54, 51], Neg tokens : [0]
Error completing request
Arguments: ('task(og4ayn9i3dmvkzd)', 'masterpiece, best quality, (anti-aliasing:1.4), sigma 400mm f1.8, photo fine print, amazing sharp focus, ultra detailed, 2d, outdoor, group photo, 3girls ADDCOMM\n1girl, shenhedef, happy ADDCOL\n1girl, kamisatoayakadef, happy ADDCOL\n1girl, kokomidef, happy', '(extra fingers, deformed hands, polydactyl:1.4), (monochrome:1.4), (greyscale), (worst quality, low quality:1.4), high contrast, (realistic, lip, nose, tooth, rouge, lipstick, eyeshadow:1.0), (dusty sunbeams:1.0), (abs, muscular, rib:1.0), censor, bar censor, mosaic censor, dutch angle, white borders, multiple views, heart censor, jpeg artifacts, grids, watermark, logo, username, text, flowers, particles, (missing fingers:1.4), (extra nipples:1.4)', [], 20, 0, False, False, 1, 1, 10.5, -1.0, -1.0, 0, 0, 0, False, 420, 580, True, 0.5, 2.5, 'R-ESRGAN 4x+ Anime6B', 0, 0, 0, [], 0, <controlnet.py.UiControlNetUnit object at 0x789397babe50>, True, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1,1', '0.2', False, True, True, 'Latent', False, '0', '0', '0.5', None, '20', '20', False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
  File "/content/lite_rosenstein/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/content/lite_rosenstein/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/lite_rosenstein/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/content/lite_rosenstein/modules/processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "/content/lite_rosenstein/extensions/lite-kaggle-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/content/lite_rosenstein/modules/processing.py", line 658, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps * step_multiplier, cached_uc)
  File "/content/lite_rosenstein/modules/processing.py", line 597, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/content/lite_rosenstein/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/content/lite_rosenstein/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/lite_rosenstein/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/content/lite_rosenstein/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/content/lite_rosenstein/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/lite_rosenstein/extensions/sd-webui-regional-prompter/scripts/latent.py", line 503, in h_Linear_forward
    return lora.lora_forward(self, input, torch.nn.Linear_forward_before_lora)
AttributeError: module 'lora' has no attribute 'lora_forward'

I hope there is a solution.
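For context on where this fails: the traceback ends in the Regional Prompter hook h_Linear_forward in scripts/latent.py, which calls lora.lora_forward from the web UI's built-in LoRA module. One plausible reading, consistent with the AttributeError, is that the installed web UI build reorganized that module and no longer exposes lora_forward. The snippet below is only a minimal sketch of a guarded hook under that assumption; it is not the extension's actual code or an official fix, and the fallback branch and import path are hypothetical.

```python
# Hypothetical sketch of a guarded version of the hook seen in the traceback
# (scripts/latent.py, h_Linear_forward). The fallback branch and the
# importability of `lora` are assumptions about the web UI environment.
import torch
import lora  # the web UI's built-in LoRA module (import path assumed)

def h_Linear_forward(self, input):
    if hasattr(lora, "lora_forward"):
        # Older web UI builds: the same call that appears in the traceback.
        return lora.lora_forward(self, input, torch.nn.Linear_forward_before_lora)
    # Newer builds no longer expose lora_forward; fall back to the original
    # Linear.forward that the LoRA module stashed on torch.nn (name assumed).
    return torch.nn.Linear_forward_before_lora(self, input)
```

Either way, the mismatch is between the extension's hook and the LoRA module's API, which is why the maintainer asks for the web UI version below.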

hako-mikan commented 1 year ago

Please tell me your web UI version and which other extensions you are using with it.