Restarting your WebUI may help.
I need to draw the whole image for it to work.
I was getting this error too when using the OpenPose Editor addon; it turned out I had selected openpose as the preprocessor instead of "none".
Same here; I am using the chilloutmix model + openpose in ControlNet.
Update: it was a stupid internet connection problem again. I solved it by downloading "body_pose_model.pth" and "hand_pose_model.pth" from https://huggingface.co/lllyasviel/ControlNet/tree/main/annotator/ckpts and placing those two files in \stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose; then it worked for me.
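If the automatic download keeps failing, a minimal sketch like this can fetch the two weights manually (assuming Hugging Face's usual resolve/main download URLs; the destination path must be adjusted to your WebUI install):

```python
# Minimal sketch: manually fetch the two openpose annotator weights the
# preprocessor failed to download. The URL pattern assumes Hugging Face's
# standard resolve/main download endpoint; DEST must point at your install.
import os
import urllib.request

BASE = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts"
DEST = r"stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose"

os.makedirs(DEST, exist_ok=True)
for name in ("body_pose_model.pth", "hand_pose_model.pth"):
    target = os.path.join(DEST, name)
    if not os.path.exists(target):  # don't re-download files already in place
        urllib.request.urlretrieve(f"{BASE}/{name}", target)
```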
Had also been dabbling with openpose prior to getting this error. I disabled ControlNet via the enable button, removed the images, and changed the preprocessor and model to none, but still received the error. Restarting the WebUI worked; it seems openpose doesn't "de-load" itself or something?
Same issue while using img2img; commenting for updates.
```
Traceback (most recent call last):
  File "B:\A.I\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 487, in process
    detected_map = preprocessor(input_image, res=pres, thr_a=pthr_a, thr_b=pthr_b)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\processor.py", line 130, in openpose
    result, _ = model_openpose(img, has_hand)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose\__init__.py", line 40, in apply_openpose
    candidate, subset = body_estimation(oriImg)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose\body.py", line 63, in __call__
    paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 139460608 bytes in function 'cv::OutOfMemoryError'
```
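The allocation failure above occurs inside the openpose body estimator's cv2.resize call, so it scales with the input image size. As a hedged workaround, downscaling the control image before preprocessing reduces what OpenCV has to allocate; the 512-pixel cap here is an arbitrary illustration, not an extension setting:

```python
# Workaround sketch (not part of the extension): shrink the control image
# before handing it to the openpose preprocessor so cv2.resize inside
# annotator/openpose/body.py has less memory to allocate.
from PIL import Image

def downscale(path: str, max_side: int = 512) -> Image.Image:
    img = Image.open(path)
    scale = max_side / max(img.size)
    if scale < 1:  # only shrink, never enlarge
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    return img
```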
```
0%|          | 0/16 [00:00<?, ?it/s]
Error executing callback cfg_denoiser_callback for B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py
Traceback (most recent call last):
  File "B:\A.I\stable-diffusion-webui\modules\script_callbacks.py", line 161, in cfg_denoiser_callback
    c.callback(params)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 121, in guidance_schedule_handler
    self.guidance_stopped = (x.sampling_step / x.total_sampling_steps) > self.stop_guidance_percent
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1269, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'PlugableControlModel' object has no attribute 'stop_guidance_percent'
```
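That AttributeError comes from the handler at cldm.py line 121 reading self.stop_guidance_percent, which torch's Module.__getattr__ rejects because the attribute was never set, consistent with a stale callback left registered after toggling or updating the extension (restarting the WebUI clears it, as noted above). A hypothetical defensive rewrite of that handler would fall back to a default:

```python
# Hypothetical defensive version of the failing handler; the attribute name
# and formula come from the traceback, while the 1.0 fallback ("never stop
# guidance early") is an assumption, not the extension's actual fix.
def guidance_schedule_handler(self, x):
    stop = getattr(self, "stop_guidance_percent", 1.0)
    self.guidance_stopped = (x.sampling_step / x.total_sampling_steps) > stop
```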
```
0%|          | 0/16 [00:07<?, ?it/s]
Error completing request
Arguments: ('task(jm5doxexoshz88o)', 0, 'masterpiece, best quality, illustration, upper body, 1boy walking, looking at viewer, green hair, medium hair, yellow eyes, demon horns, black coat,cyberpunk city, trending on artstation,4k,', 'lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name', [], <PIL.Image.Image image mode=RGBA size=259x898 at 0x1A052318100>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, True, 'openpose', 'control_sd15_openpose [fef5e48e]', 1, {'image': array([[[132, 169, 238], [132, 169, 238], [132, 169, 238], ..., [132, 169, 238], [132, 169, 238], [132, 169, 238]],
Traceback (most recent call last):
  File "B:\A.I\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "B:\A.I\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
    processed = process_images(p)
  File "B:\A.I\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "B:\A.I\stable-diffusion-webui\modules\processing.py", line 632, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "B:\A.I\stable-diffusion-webui\modules\processing.py", line 1048, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 322, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 225, in launch_sampling
    return func()
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 322, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 123, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "B:\A.I\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1212, in _call_impl
    result = forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 168, in forward2
    return forward(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 125, in forward
    assert outer.hint_cond is not None, f"Controlnet is enabled but no input image is given"
AssertionError: Controlnet is enabled but no input image is given
```
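The final assertion is the core of this issue: the ControlNet forward pass runs with the unit enabled while outer.hint_cond (the preprocessed input image) is still None. A minimal sketch of the same check, with hypothetical names, shows the only two user-facing ways to satisfy it: provide an image, or genuinely disable the unit.

```python
# Minimal sketch of the condition cldm.py asserts (function and parameter
# names here are hypothetical): an enabled ControlNet unit must carry an
# input image (hint) by the time sampling starts.
def validate_controlnet_unit(enabled: bool, image) -> None:
    if enabled and image is None:
        raise ValueError("ControlNet is enabled but no input image is given")
```

If the unit shows as disabled in the UI but the assertion still fires, the stale state noted earlier is the likely cause, and a WebUI restart clears it.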