Closed Adel1525 closed 1 year ago
Please share the entire stack trace of the error. Ideally you should not bypass the bug report format, as it makes it harder for maintainers to understand what the actual problem is.
@ljleb since I have the exact same error, I'm posting my logs here:
Arguments: ('task(9mdhnpg0j7atr0c)', 'an overjoyed girl in a black leotard, pink coat, pink skirt and cat ears is standing in front of a lightning background with her hands up, green eyes, black collar, Chizuko Yoshida, an anime drawing, shock art, lightning', 'easynegative', [], 20, 0, False, False, 1, 4, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <controlnet.py.UiControlNetUnit object at 0x7fc7e7db1460>, <controlnet.py.UiControlNetUnit object at 0x7fc7dc8032e0>, <controlnet.py.UiControlNetUnit object at 0x7fc7e827b2e0>, <controlnet.py.UiControlNetUnit object at 0x7fc7e7da4040>, <controlnet.py.UiControlNetUnit object at 0x7fc7e7db1bb0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "/home/semanual/ssd/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/home/semanual/ssd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/home/semanual/ssd/stable-diffusion-webui/modules/processing.py", line 526, in process_images
res = process_images_inner(p)
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/modules/processing.py", line 680, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 269, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/modules/processing.py", line 907, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 251, in launch_sampling
return func()
File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 135, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 535, in forward_webui
return forward(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 374, in forward
control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context)
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 99, in forward
return self.control_model(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 358, in forward
File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 344, in align
b, c, h1, w1 = hint.shape
ValueError: too many values to unpack (expected 3)
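For context, the `ValueError` above is Python's tuple-unpacking error: it fires when the number of names on the left of the assignment doesn't match the length of the shape being unpacked. The exact code path that produced "expected 3" isn't shown in the trace (the displayed `cldm.py` line unpacks four names), so the sketch below only illustrates the mechanism, with the hint shape modeled as a plain tuple under the assumption that a ControlNet hint is 4-D (batch, channels, height, width):

```python
# Assumed hint.shape for a 4-D ControlNet hint tensor: (b, c, H, W)
shape = (1, 3, 1200, 800)

# Unpacking a 4-element shape into three names raises the error from the log:
try:
    c, h1, w1 = shape
except ValueError as e:
    print(e)  # too many values to unpack (expected 3)

# The four-name unpack shown in the traceback matches a 4-D hint and succeeds:
b, c, h1, w1 = shape
```

This suggests the installed extension code and the hint tensor's rank disagreed, which is consistent with the fix below (updating/re-downloading the model files).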
Same problem here:
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 1200
raw_W = 800
target_H = 1200
target_W = 800
estimation = 800.0
preprocessor resolution = 800
Loading model from cache: control_v11p_sd15_openpose [cab727d4]
Loading preprocessor: openpose_full
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 1248
raw_W = 976
target_H = 1200
target_W = 800
estimation = 938.4615384615385
preprocessor resolution = 938
0%| | 0/30 [00:00<?, ?it/s]ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 150, 100]).
0%| | 0/30 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(pw0n1jxftrzv5y3)', '(((masterpiece,best quality))),cyberpunk clothes,(1girl),pink eyes,3D,1girl,long hair,small breasts, hoodie, mini skirt, pink hair,upper body,side-tie,tight,outdoors,midriff,(looking at viewer, smile),Potrait,anime skiny girl with headphones,digital cyberpunk anime art, cyberpunk anime girl, digital cyberpunk anime art,cyberpunk city background, nightcore, anime moe artstyle, anime girl of the future, tech shoes,ultra-detailed, absurdres, solo, volumetric lighting, best quality, intricate details, sharp focus, hyper detailed
Seems like I solved it by generating a random image with this option enabled and both the openpose preprocessor and model selected. It downloaded something from lllyasviel's GitHub, and the error didn't happen again.
@Semanual I tried the same and my problem was also resolved, thank you very much!
I ran into this too. Generating with openpose did indeed download some new packages, but the problem persists.
When updating to the latest version (1.1.197), do not forget to restart the terminal.
When I tried to use ControlNet in img2img after the last update, it didn't work at all and always gave me this error: "too many values to unpack (expected 3)".