TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth
MIT License

Error Open Pose 3D to Txt2img controlnet #1783

putuoka opened this issue 1 year ago

putuoka commented 1 year ago

```
Error completing request
Arguments: ('task(bjfwfefust0hdum)', '{best quality}, {{masterpiece}}, {an extremely delicate and beautiful},Outstanding light and shadow, extremely detailed wallpaper,Clear and bright sunlight,head portrait,1girl,big breast,Blush,short wave hair,Strong sunlight,,blue shirt,suit,Delicate hair,Real skin texture, sagging chest,clear pores and skin wrinkles,Wide shoulders,window,((strong light shines on the face)), upper body, ,', 'by bad-picture-chill-75v, umbrella, easynegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans,extra fingers,fewer fingers,strange fingers,bad hand,bad eyes,missing legs,extra arms,extra legs,extra toes,penis,extra limbs,extra vaginal,bad vaginal,Futanari,Man,ugly, fat, anorexic, blur, warping, grayscale, necklace, (piercings), innie, mirror, DAZ 3D, anime, animated, holding, contortion, warped body, spun around, clothes, panties, bra, bikini,canvas frame, cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)),((extra limbs)),((close up)),((b&w)), wierd colors, blurry, (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), red hairy crotch pussy(((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy, 3d render, (text), (watermark), connected bodies, penis:1.1, dildo, phallus, phallic, dick, cock, cocksucking, cocksucker,mutated hands,mutated legs,cum, semen ,Contains sperm,mature,((peeing fluid simulation from anus:1.2)),((Peeing urine out of anus:1.2)),rib cage,', [], 60, 19, False, False, 1, 1, 7.5, -1.0, -1.0, 0, 0, 0, False, 640, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x7f4e0bacf940>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, 50) {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 836, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 201, in sample
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 51, in launch_sampling
    return func()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 201, in <lambda>
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/sampler.py", line 98, in sample
    x = uni_pc.sample(img, steps=S, skip_type=shared.opts.uni_pc_skip_type, method="multistep", order=shared.opts.uni_pc_order, lower_order_final=shared.opts.uni_pc_lower_order_final)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 758, in sample
    model_prev_list = [self.model_fn(x, vec_t)]
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 453, in model_fn
    return self.data_prediction_fn(x, t)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 437, in data_prediction_fn
    noise = self.noise_prediction_fn(x, t)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 431, in noise_prediction_fn
    return self.model(x, t)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 417, in model
    res = self.model_fn(x, t, cond, uncond)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 362, in model_fn
    noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/uni_pc.py", line 297, in noise_pred_fn
    output = model(x, t_input, cond, **model_kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/models/diffusion/uni_pc/sampler.py", line 88, in <lambda>
    lambda x, t, c: self.model.apply_model(x, t, c),
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 233, in forward2
    return forward(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 176, in forward
    control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 115, in forward
    return self.control_model(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 383, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/util.py", line 129, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 273, in _forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 332, in xformers_attention_forward
    k_in = self.to_k(context_k)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (308x1024 and 768x320)
```


putuoka commented 1 year ago

Alright, it seems the SD 2.1 model doesn't support these ControlNet models.
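That diagnosis matches the shapes in the error: SD 2.1 text embeddings are 1024-dimensional, while a ControlNet model trained for SD 1.5 has cross-attention `to_k`/`to_v` weights expecting 768-dimensional context, so the linear layer's matmul fails. A minimal numpy sketch of the mismatch (illustrative only, not the webui code; the shapes are taken from the traceback):

```python
import numpy as np

# 308 prompt tokens x 1024 dims: context from the SD 2.1 text encoder.
context = np.zeros((308, 1024))

# Transposed to_k weight of an SD 1.5 ControlNet: expects 768-dim context,
# projects to 320 dims. This is the (768x320) "mat2" in the error message.
to_k_weight_t = np.zeros((768, 320))

try:
    context @ to_k_weight_t  # same multiplication F.linear performs
except ValueError as e:
    print("shape mismatch:", e)  # 1024 != 768, so the matmul is undefined

# With a matching SD 1.5 pipeline the context would be 768-dim and it works:
out = np.zeros((308, 768)) @ to_k_weight_t
print(out.shape)  # (308, 320)
```

In short, the ControlNet checkpoint must be trained for the same base model family (1.5 vs 2.1) as the checkpoint being used.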

TheLastBen commented 1 year ago

Will add ControlNet 2.1 soon.

iaclaudioia8 commented 1 year ago

Is it already added? I'm still getting the same issue. Thank you. Could I download it and add it manually?

TheLastBen commented 1 year ago

There are ControlNet v2.1 models in the notebook.