hako-mikan / sd-webui-negpip

Extension for Stable Diffusion web-ui that enables negatively weighted tokens in the prompt
GNU Affero General Public License v3.0
200 stars · 16 forks

Error "Sizes of tensors must match except in dimension" when enabled together with Tiled Diffusion (MultiDiffusion) #23

Closed. yamosin closed this issue 11 months ago.

yamosin commented 1 year ago

When NegPiP and MultiDiffusion (the Tiled Diffusion feature) are enabled at the same time, the error below is raised. System environment: Windows 11. WebUI version: 秋叶 (Qiuye) integrated package 4.4, A41WebUI1.6.

```
To create a public link, set share=True in launch().
[Lobe]: Initializing Lobe
Startup time: 32.6s (prepare environment: 9.0s, import torch: 8.1s, import gradio: 1.5s, setup paths: 0.8s, initialize shared: 0.4s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 8.5s, create ui: 2.4s, gradio launch: 0.5s, app_started_callback: 0.4s).
Applying attention optimization: xformers... done.
Model loaded in 3.9s (load weights from disk: 0.4s, create model: 0.9s, apply weights to model: 2.3s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.2s).
2023-10-30 14:23:10,197 - AnimateDiff - INFO - Moving motion module to CPU
[Tiled Diffusion] ControlNet found, support is enabled.
MultiDiffusion hooked into 'Restart' sampler, Tile size: 96x96, Tile count: 4, Batch size: 4, Tile batches: 1 (ext: ContrlNet)
```
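As a side note, the "Tile count: 4" in the log is consistent with the generation parameters in the arguments below (1024×1024 image, 96×96 latent tiles, overlap 48). A back-of-the-envelope check, assuming the usual overlapping-grid formula `ceil((latent - tile) / (tile - overlap)) + 1` per axis (the exact formula MultiDiffusion uses may differ slightly):

```python
import math

def tiles_per_axis(latent: int, tile: int, overlap: int) -> int:
    # Number of overlapping tile positions needed to cover one axis.
    return math.ceil((latent - tile) / (tile - overlap)) + 1

latent = 1024 // 8                     # 1024 px image -> 128 latent units
n = tiles_per_axis(latent, 96, 48)     # 2 positions per axis
print(n * n)                           # 4 tiles, matching the log
```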

CD Tuner Effective : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 0] NegPiP enable, Positive:[3],Negative:None Error completing request Arguments: ('task(aeicsi7i99ugbex)', 0, 'Exquisite, full body, beautiful, young adult female Anime character, exaggerated features, expressive, pink hair, demon girl, diamond pupils, fluffy tail, cascading hair accessories, (eyeball hair ornament:1.1), beholder eye demon, (color gradient clothes made:1.2), ambient occlusion. Incredibly detailed, Overhead lighting, Cold Colors, Dreamcore, Calotype, Needle sharp, (animal ears:-1)', '[::0.2], Aissist-neg', [], <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x20ECC6B3A30>, None, None, None, None, None, None, 20, 'Restart', 4, 0, 1, 1, 1, 7, 1.5, 0.7, 0, 1024, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000020EC67B2E90>, 0, False, '', 0.8, 210015825, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 
'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000020ECC6BC2E0>, False, 
'u2net', False, False, 10, 240, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, False, '1,1', 'Horizontal', '', 2, 1, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F249E5060>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F24A06740>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EABFF82B0>, [], [], False, 0, 0.8, 0, 0.8, 0.5, False, False, 0.5, 8192, -1.0, True, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, '* CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 'Positive', 0, ', ', 'Generate and always save', 32, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, '', '', '', '', '', '', None, 1.0, 1, False, False, '', False, 'Normal', 1, True, 1, 1, 'None', False, False, False, 'YuNet', 512, 1024, 0.5, 1.5, False, 'face close up,', 0.5, 0.5, False, True, '', '', '', '', '', 1, 'None', '', '', 1, 'FirstGen', False, False, 'Current', False, 1 2 3 0 , False, '', False, 1, False, False, 30, '', False, False, False, '', False, '', False, '', False, '', False, '', False, None, None, False, None, None, False, None, None, False, 50, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}

```
Traceback (most recent call last):
  File "D:\sd-webui-aki-v4.4\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\sd-webui-aki-v4.4\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "D:\sd-webui-aki-v4.4\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "D:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\sd-webui-aki-v4.4\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\sd-webui-aki-v4.4\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "D:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\sd-webui-aki-v4.4\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "D:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\modules\sd_samplers_extra.py", line 71, in restart_sampler
    x = heun_step(x, old_sigma, new_sigma)
  File "D:\sd-webui-aki-v4.4\modules\sd_samplers_extra.py", line 19, in heun_step
    denoised = model(x, old_sigma * s_in, **extra_args)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
    return fn(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 70, in kdiff_forward
    return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
  File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 165, in sample_one_step
    x_tile_out = repeat_func(x_tile, bboxes)
  File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 65, in repeat_func
    return self.sampler_forward(x_tile, sigma_tile, cond=cond_tile)
  File "D:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\modules\sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\tomesd\patch.py", line 63, in _forward
    x = u_c(self.attn2(m_c(self.norm2(x)), context=context)) + x
  File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.4\extensions\sd-webui-negpip\scripts\negpip.py", line 330, in forward
    return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self, self.conds[0], self.contokens[0], self.unconds[0], self.untokens[0])
  File "D:\sd-webui-aki-v4.4\extensions\sd-webui-negpip\scripts\negpip.py", line 311, in sub_forward
    context = torch.cat([context, conds], 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 1 for tensor number 1 in the list.
```
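For reference, `torch.cat` requires every dimension except the concatenation dimension to match, which is exactly the condition violated at `negpip.py` line 311: Tiled Diffusion replicates the conditioning once per tile, so `context` arrives with the tile batch size (8 in this log), while the extra tokens NegPiP concatenates still have batch size 1. A minimal sketch of the mismatch and of one possible direction for a fix (the tensor shapes and the `expand` workaround are illustrative assumptions, not NegPiP's actual code):

```python
import torch

# Tiled Diffusion's repeat step yields a per-tile batch of 8;
# NegPiP's extra conditioning was built for a single batch entry.
context = torch.zeros(8, 77, 768)   # (tile batch, tokens, embed dim)
conds = torch.zeros(1, 77, 768)     # extra tokens, batch 1

try:
    torch.cat([context, conds], 1)  # same shape of call as negpip.py line 311
    err = None
except RuntimeError as e:
    err = str(e)
print(err)  # Sizes of tensors must match except in dimension 1. Expected size 8 but got size 1 ...

# Repeating the smaller tensor across the batch dimension makes the concat legal:
merged = torch.cat([context, conds.expand(context.shape[0], -1, -1)], 1)
print(merged.shape)  # torch.Size([8, 154, 768])
```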

Tested: with NegPiP disabled, the Tiled Diffusion feature works normally with no errors.