hako-mikan / sd-webui-regional-prompter

set prompt to divided region
GNU Affero General Public License v3.0

Can't use Attention Generation Mode (using SDXL model; haven't tried other SD versions yet): "IndexError: list index out of range" #297

Closed: Manchovies closed this issue 4 months ago

Manchovies commented 7 months ago

**Describe the bug**
Generation Mode: Attention does not work.

**Environment**
Web-UI version: v1.6.0-272-gddc2a349
SD Version: SDXL
LoRA/LoCon/LoHa: n/a

**Other Enabled Extensions**
(see attached screenshot)

```
Regional Prompter Active, Pos tokens : [117], Neg tokens : [0]
  0%| | 0/30 [00:10<?, ?it/s]
*** Error completing request
*** Arguments: ('task(jo51sr0vvi0es8v)', 'skewed perspective professional art photograph,minimalist art, masterpiece,in front of a blue-tinted leather background, hyperdetailed photography, sharp focus ADDCOM of a (single vantablack metallic chrysanthemum flower:1.2) sculpture with faceted edges and very slight accents of color on the petals, hyperdetailed photography ADDROW a (single vantablack metallic chrysanthemum flower:1.2) sculpture with faceted edges and very slight accents of color on the petals, hyperdetailed photography, metallic flower stem on a dark out-of focus blue leather background', '(((DeviantArt))),sphere, spherical, wooden base, Black and white, glass, sketch, drawing, digital art, bad photography, low contrast, poor composition, unprofessional, amateur, duplicate, double, multiple flowers', [], 30, 'DPM++ 2M Karras', 1, 1, 5, 2016, 1152, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000028C3D973AC0>, 0, True, 'C-Drive-SDXLRefiner1.0-pruned-no-ema.safetensors', 0.8, 3802241302, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, True, False, 7, 8, 0.1, 0.25, 'bicubic', 0.25, 4, True, False, False, 'None', 20, True, False, 'Matrix', 'Rows', 'Mask', 'Prompt', '1.5,1', '0.2', True, True, True, 'Attention', ['[', '"', '[', '"'], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '', 'Positive', 0, ', ', 'Generate and always save', 32) {}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 41, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\processing.py", line 869, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\processing.py", line 1145, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model
    return self.model(x, t, cond)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
    return self.diffusion_model(
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
    h = module(h, emb, context)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
    x = layer(x, context)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
    x = block(x, context=context[i])
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
    return checkpoint(
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 165, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 539, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 182, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 478, in _forward
    self.attn2(
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\extensions\sd-webui-regional-prompter\scripts\attention.py", line 411, in forward
    ox = matsepcalc(x, contexts, mask, self.pn, 1)
  File "D:\stable-diffusion-webui-1.5.1 (1)\newstablediffusionwebui\stable-diffusion-webui\extensions\sd-webui-regional-prompter\scripts\attention.py", line 179, in matsepcalc
    context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
IndexError: list index out of range
```
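
The last frame is the extension's `matsepcalc` slicing `contexts` with entries from `tll`. Since out-of-range slice bounds on a tensor do not raise, the `IndexError` presumably comes from `tll[i]` itself, i.e. the code steps through more regions than `tll` has entries for. A minimal sketch of that failure mode (the names `tll` and `TOKENSCON` are taken from the traceback; the values and the surrounding loop are illustrative assumptions, not the extension's actual code):

```python
import torch

TOKENSCON = 77  # tokens per CLIP chunk; value assumed for illustration

# Illustrative stand-ins: `contexts` as the concatenated text embeddings
# (117 positive tokens -> 2 chunks of 77) and `tll` holding one
# (start_chunk, end_chunk) pair per region parsed from the prompt.
contexts = torch.randn(2, 2 * TOKENSCON, 2048)
tll = [(0, 1), (1, 2)]        # two regions actually parsed

expected_regions = 3          # hypothetical: settings imply one region more

try:
    for i in range(expected_regions):
        # The tensor slice tolerates out-of-range bounds, but `tll[i]` is
        # plain Python list indexing, so i == 2 raises the same
        # "IndexError: list index out of range" seen above.
        context = contexts[:, tll[i][0] * TOKENSCON : tll[i][1] * TOKENSCON, :]
except IndexError as e:
    print("IndexError:", e)
```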

hako-mikan commented 6 months ago

I'm unable to grasp the situation. Could you please describe the problem in more detail? Does the issue occur even with simple area specifications as demonstrated in the readme?
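
For example, a minimal two-region test along these lines (illustrative values, not copied verbatim from the readme, using only options that already appear in your log):

```
Generation Mode: Attention
Matrix mode, split by Rows
Divide Ratio: 1,1
Prompt: blue sky ADDROW green forest
```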