hako-mikan / sd-webui-regional-prompter

set prompt to divided region
GNU Affero General Public License v3.0

Error when using two different LoRAs with the Latent option enabled #277

Closed CCPRAICES closed 6 months ago

CCPRAICES commented 10 months ago

Error when using two different LoRAs with the Latent option enabled. Stable Diffusion WebUI 1.6, RTX 3090 GPU. With the Attention option the error does not occur, and it also does not happen if I use a single LoRA in Latent mode. Thanks for everything!

```
2023-11-30 18:46:45,467 - ControlNet - INFO - Loading model from cache: control_v11p_sd15_openpose [cab727d4]
2023-11-30 18:46:45,469 - ControlNet - INFO - Loading preprocessor: openpose_full
2023-11-30 18:46:45,469 - ControlNet - INFO - preprocessor resolution = 512
2023-11-30 18:46:45,504 - ControlNet - INFO - ControlNet Hooked - Time = 0.047872304916381836
1,1 0.2 Horizontal
Regional Prompter Active, Pos tokens : [19, 18], Neg tokens : [80]
Error completing request
Arguments: ('task(byxt8wjldd0upcm)', 'a man, master piece, raytracing, ((gerardo)), (((8k, detailed, sharpened))) \n ADDCOL \na man, master piece, raytracing, ((jose)), (((8k, detailed, sharpened))) ', 'BadDream:1.4, deformed, bad quality', [], 6, 'Euler a', 2, 8, 1.3, 512, 720, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000021446D36FB0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=True, module='openpose_full', model='control_v11p_sd15_openpose [cab727d4]', weight=1, image={'image': array([...], dtype=uint8), 'mask': array([...], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), True, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Latent', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 856, in process_images_inner
    p.setup_conds()
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 1309, in setup_conds
    super().setup_conds()
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 469, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 455, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
    encoder_outputs = self.encoder(
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
    layer_outputs = encoder_layer(
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 383, in forward
    hidden_states, attn_weights = self.self_attn(
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 272, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "M:\IA Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-regional-prompter\scripts\latent.py", line 488, in h_Linear_forward
    return networks.network_forward(self, input, networks.originals.Linear_forward)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 413, in network_forward
    y = module.forward(input, y)
  File "M:\IA Stable Diffusion\stable-diffusion-webui\extensions-builtin\Lora\network.py", line 157, in forward
    raise NotImplementedError()
NotImplementedError
```
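The last three frames are the informative ones: Regional Prompter's latent.py replaces Linear.forward with a hook that routes the call through the built-in Lora extension's network_forward, which then calls the matched network module's own forward method; a module type that provides no forward falls through to the base class, which raises NotImplementedError. The snippet below is a simplified, self-contained sketch of that apparent mechanism, not the actual web-ui source; the class names and placeholder math are illustrative only.

```python
# Simplified sketch of the failure path implied by the traceback above.
# Names and the placeholder math are illustrative, not the real web-ui code.

class NetworkModule:
    """Base class: each network type is expected to provide its own forward()."""
    def forward(self, x, y):
        # the built-in Lora extension ends up here when a concrete module type
        # has no forward() of its own -> NotImplementedError
        raise NotImplementedError()

class LoraModule(NetworkModule):
    """Plain LoRA: implements forward(), so the latent-mode hook works."""
    def forward(self, x, y):
        return y + 0.0 * x  # stands in for y + up(down(x)) * scale

class LohaModule(NetworkModule):
    """LyCORIS/LoHa-style module with no forward() override at the time of this report."""
    pass

def network_forward(module, x, y):
    # the latent-mode hook sends every hooked Linear call through here
    return module.forward(x, y)

print(network_forward(LoraModule(), x=1.0, y=2.0))   # works
try:
    network_forward(LohaModule(), x=1.0, y=2.0)      # fails like the traceback
except NotImplementedError:
    print("LoHa module has no forward(): same NotImplementedError as above")
```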


hako-mikan commented 10 months ago

Is the LoRA you added LyCORIS? If so, it's a bug on the web-ui side. In that case, please enable the "Use LoHa or other" option. Generation is possible, but the generation speed will be slower.

Prince-Mars commented 10 months ago

Is the LoRA you added LyCORIS? If so, it's a bug on the web-ui side. In that case, please enable the "Use LoHa or other" option. Generation is possible, but the generation speed will be slower.

I have the same error. I fixed it by re-enabling support for a1111-sd-webui-lycoris, but now I have a new problem with sd-webui-lora-block-weight...

Prince-Mars commented 10 months ago

enable the "Use LoHa or other" option

but this slows down the generation too much. Can you fix the bug? Please.

hako-mikan commented 10 months ago

I asked the developer of the LyCORIS-related module in the Web-UI to make it work with LyCORIS. He said he would do that.

Prince-Mars commented 8 months ago

I asked the developer of the LyCORIS-related module in the Web-UI to make it work with LyCORIS. He said he would do that.

Has the bug been fixed?

hako-mikan commented 6 months ago

The fix has been made and no errors occur now, but it doesn't seem to be as fast as expected. Given LoHa's complex structure, this may be unavoidable. When you want to depict multiple characters at the same time, it's recommended to use LoCon, or to train your LoHa carefully so that it responds well to prompts. Depending on the training approach, it's also possible to train a single LoRA on multiple characters and costumes and invoke each of them through prompts; in that case, they can be differentiated even in Attention mode.
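For context on why LoHa tends to stay heavier here: a plain LoRA delta is a single low-rank product that can be applied with two small matrix multiplies, while LoHa builds its delta as the element-wise (Hadamard) product of two low-rank products, which in general forces the full-size weight delta to be materialized before it can be applied. The sketch below is a rough NumPy illustration of that difference under these assumptions; the shapes are arbitrary and it does not reproduce the extension's actual code.

```python
import numpy as np

d_out, d_in, rank = 768, 768, 8
x = np.random.randn(d_in)

# LoRA: delta_W = B @ A (low rank). It can be applied to an activation with
# two small matmuls and never needs the full d_out x d_in matrix.
A = np.random.randn(rank, d_in)
B = np.random.randn(d_out, rank)
lora_out = B @ (A @ x)

# LoHa: delta_W = (B1 @ A1) * (B2 @ A2), an element-wise (Hadamard) product of
# two low-rank factors. The Hadamard product has no cheap factored form in
# general, so the full-size delta is built before it can be applied -- one
# plausible reason Latent-mode generation with LoHa remains slower.
A1, A2 = np.random.randn(rank, d_in), np.random.randn(rank, d_in)
B1, B2 = np.random.randn(d_out, rank), np.random.randn(d_out, rank)
delta_W = (B1 @ A1) * (B2 @ A2)   # full d_out x d_in matrix
loha_out = delta_W @ x
```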