pkuliyi2015 / multidiffusion-upscaler-for-automatic1111

Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0

Error when Clip_Vision (t2iadapter_style_sd14v1) is enabled #66

Open · SeBL4RD opened this issue 1 year ago

SeBL4RD commented 1 year ago

Error when the ControlNet Clip_Vision preprocessor (t2iadapter_style_sd14v1) is enabled. Generation just crashes.


Error completing request   0%|                                                                  | 0/50 [00:00<?, ?it/s]
Arguments: ('task(oahsho0xyo6j3la)', '<lora:SeBL4RD:1> driving a lamborghini countach, majestic, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, serenity, photorealistic photography art by midjourney and greg rutkowski ', '', [], 50, 0, False, False, 1, 1, 13, 2286490026.0, -1.0, 0, 0, 0, False, 512, 910, False, 0.2, 2, 'ESRGAN 4x', 50, 0, 0, ['Model hash: c35782bad8'], 0, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, True, True, True, True, 0, 1200, 128, False, '', 0, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x00000271F5C136D0>, <scripts.external_code.ControlNetUnit object at 0x00000271F5A1F550>, <scripts.external_code.ControlNetUnit object at 0x00000271F5B91BD0>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 852, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 351, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
    return func()
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 351, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 119, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 82, in kdiff_repeat
    return self.compute_x_tile(x_in, org_func, repeat_func, custom_func)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 127, in compute_x_tile
    self.init(x_in)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\abstractdiffusion.py", line 153, in init
    self.prepare_control_tensors()
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\abstractdiffusion.py", line 479, in prepare_control_tensors
    control_tile = control_tensor[:, :, bbox[1]*8:bbox[3]*8, bbox[0]*8:bbox[2]*8]
IndexError: too many indices for tensor of dimension 2

Loading model from cache: control_sd15_depth [fef5e48e]
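
The failing line assumes every ControlNet hint is a 4-D spatial map ([batch, channel, height, width]) and crops a tile from it, scaling the latent-space bbox by 8 into pixel coordinates. The Clip_Vision/style adapter hint, however, is a 2-D embedding with no spatial axes, so the 4-D slice raises the IndexError. A minimal sketch of the mismatch, with a dimensionality guard (`crop_control_tile` is a hypothetical helper for illustration, not the extension's actual fix):

```python
import torch

def crop_control_tile(control_tensor: torch.Tensor, bbox):
    """Crop one tile out of a ControlNet hint tensor.

    bbox is (x1, y1, x2, y2) in latent coordinates; spatial hints are
    [batch, channel, height, width] in pixel space, hence the *8 scaling.
    A non-spatial hint (e.g. the 2-D CLIP-vision style embedding) has no
    height/width axes to crop and is passed through unchanged.
    """
    if control_tensor.dim() == 4:  # spatial map: safe to slice H and W
        x1, y1, x2, y2 = bbox
        return control_tensor[:, :, y1 * 8:y2 * 8, x1 * 8:x2 * 8]
    return control_tensor  # 2-D style embedding: nothing spatial to crop

# The failing line assumed every hint is 4-D:
#   control_tile = control_tensor[:, :, bbox[1]*8:bbox[3]*8, bbox[0]*8:bbox[2]*8]
style_embedding = torch.randn(1, 768)  # shaped like a CLIP-vision embedding
print(crop_control_tile(style_embedding, (0, 0, 12, 12)).shape)  # torch.Size([1, 768])
```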
pkuliyi2015 commented 1 year ago

T2I adapters seem to be different from the most popular ControlNet models. I have never used them; do they have any significant advantages? If not, I won't spend time fixing this.

SeBL4RD commented 1 year ago

Yes, it's very useful: it gives inspiration based simply on an image, without having to prompt for it. It can be mixed with a Depth layer or others, and precisely blended with other layers and the prompt; the impact is incredible. I'd understand if you didn't want to spend time on it. A little example:

- A prompt with a BMW in a city
- An autumn forest as the style inspiration

[Attached images: generations of "A BMW M3 e36, realistic, in a city, photography" (seed 3117485826), with and without the autumn-forest style reference]

SeBL4RD commented 1 year ago

I just did another test: Clip_Vision and t2i_style work if the image is 512x512 (with MultiDiffusion), but if I go above that, e.g. 768x768, I get the error mentioned above, even if I greatly reduce the decoder tile size. Without MultiDiffusion, I can generate at these resolutions with these ControlNet layers.

pkuliyi2015 commented 1 year ago

Well, I have just updated the readme and am now starting to investigate this. But first I need to play with the t2i adapter, which seems interesting. Would you please give me a link to that model?

SeBL4RD commented 1 year ago

https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth

Use Clip_Vision as the "Preprocessor", together with this model.
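
For context, the `scripts.external_code.ControlNetUnit` objects visible in the argument dumps above come from sd-webui-controlnet's scripting API. Configuring the same unit programmatically would look roughly like this; the exact field names are an assumption based on that era of the extension and may differ between versions:

```python
# A minimal sketch, assuming sd-webui-controlnet's external_code module
# (the ControlNetUnit objects in the argument dumps come from it).
# Field names are an assumption and may vary across versions.
from scripts import external_code  # only importable inside the webui process

style_unit = external_code.ControlNetUnit(
    enabled=True,
    module="clip_vision",             # the "Preprocessor" selected in the UI
    model="t2iadapter_style_sd14v1",  # the .pth linked above
    weight=1.0,
)
```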

pkuliyi2015 commented 1 year ago

Thank you. Let me play with it first :)

SeBL4RD commented 1 year ago

Sorry to come back to this, but since we already have a conversation going here: I seem to have the same problem as last night with OpenPose. I get this crash again even though I'm using the same settings as last night. It's a bit off-topic to post it here, but here is the error when I activate OpenPose:

Arguments: ('task(wumsce6arqxcw4u)', 'Jacques Chirac, <lora:ChiracLora:1>, in a superman suit, flying on top of a city, majestic, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, serenity, photorealistic photography art by midjourney and greg rutkowski\n', 'cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)),((extra limbs)),((close up)),((b&w)), wierd colors, blurry, (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blur, bad art, bad anatomy, 3d render, freckles, watermark, text, url, hat, sunglasses, glasses', [], 50, 0, False, False, 1, 1, 13, 1259012780.0, -1.0, 0, 0, 0, False, 512, 910, True, 0.2, 2.35, 'ESRGAN 4x', 50, 0, 0, ['Model hash: c35782bad8'], 0, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, True, True, True, True, 0, 1536, 112, False, '', 0, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x00000206764D7010>, <scripts.external_code.ControlNetUnit object at 0x000002067525FFD0>, <scripts.external_code.ControlNetUnit object at 0x000002067525FA60>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 924, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
    return func()
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 138, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\utils\utils.py", line 179, in wrapper
    return fn(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 104, in kdiff_forward
    return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 157, in sample_one_step
    return org_func(x_in)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 90, in org_func
    return self.sampler_forward(x, sigma_in, cond=cond)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1212, in _call_impl
    result = forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 233, in forward2
    return forward(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 176, in forward
    control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 115, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 380, in forward
    h += guided_hint
RuntimeError: output with shape [1, 320, 150, 267] doesn't match the broadcast shape [2, 320, 150, 267]
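
This error is a batch-size mismatch rather than an indexing problem: the `x_in[a:b]` slicing visible in the sd_samplers_kdiffusion frame means the UNet can run on a sub-batch of 1, while the cached ControlNet hint is still batched for cond and uncond together (batch 2). The in-place `h += guided_hint` cannot broadcast a batch of 2 into a buffer of batch 1. A minimal repro, plus one hedged way to realign the shapes (an assumption, not necessarily the author's actual fix):

```python
import torch

h = torch.zeros(1, 320, 150, 267)            # UNet features for a sub-batch of 1
guided_hint = torch.zeros(2, 320, 150, 267)  # hint still batched for cond + uncond

try:
    h += guided_hint  # in-place add cannot grow h's batch dimension from 1 to 2
except RuntimeError as e:
    print(e)  # "...doesn't match the broadcast shape [2, 320, 150, 267]"

# One hedged way to realign the batch sizes (an assumption, not the actual fix):
if guided_hint.shape[0] != h.shape[0]:
    guided_hint = guided_hint[: h.shape[0]]
h += guided_hint
```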
SeBL4RD commented 1 year ago

Generation works up to 50%, then crashes with this message when Hires. fix starts. Last night I was generating at 2100x1200 with OpenPose and it was working fine. [Attached image: 00206]

pkuliyi2015 commented 1 year ago

I know what happened. I will fix it in 10 min.

pkuliyi2015 commented 1 year ago

Please give it a try now.

SeBL4RD commented 1 year ago
Error completing request
Arguments: ('task(p28p0shv7xq1f6l)', 'Jacques Chirac, <lora:ChiracLora:1>, in a superman suit, flying on top of a city, majestic, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, serenity, photorealistic photography art by midjourney and greg rutkowski\n', 'cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)),((extra limbs)),((close up)),((b&w)), wierd colors, blurry, (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blur, bad art, bad anatomy, 3d render, freckles, watermark, text, url, hat, sunglasses, glasses', [], 50, 0, False, False, 1, 1, 13, 1259012780.0, -1.0, 0, 0, 0, False, 512, 910, True, 0.2, 2.35, 'ESRGAN 4x', 50, 0, 0, ['Model hash: c35782bad8'], 0, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, True, True, True, True, 0, 1536, 160, False, '', 0, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x00000254770BD420>, <scripts.external_code.ControlNetUnit object at 0x0000025476E96230>, <scripts.external_code.ControlNetUnit object at 0x0000025476E955D0>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 924, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
    return func()
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 138, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\utils\utils.py", line 179, in wrapper
    return fn(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 104, in kdiff_forward
    return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 157, in sample_one_step
    self.reset_controlnet_tensors()
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 90, in org_func
    return self.sampler_forward(x, sigma_in, cond=cond)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1212, in _call_impl
    result = forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 233, in forward2
    return forward(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 176, in forward
    control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 115, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Seb\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 380, in forward
    h += guided_hint
RuntimeError: output with shape [1, 320, 150, 267] doesn't match the broadcast shape [2, 320, 150, 267]
SeBL4RD commented 1 year ago

Forget my previous comment, it's working. Restarting SD after the extension update solved the problem. (That would also explain the inconsistent traceback above, where the source lines shown don't match the call chain: Python prints traceback lines from the updated files on disk while still executing the previously loaded module.)

SeBL4RD commented 1 year ago

No crash at the moment; 60% done.

SeBL4RD commented 1 year ago

OpenPose works again in Hires. fix, thank you :) [Attached image: 00207]

pkuliyi2015 commented 1 year ago

OK now?

pkuliyi2015 commented 1 year ago

Well, fantastic work.

SeBL4RD commented 1 year ago

> Well, fantastic work.

It's supposed to be "Super Menteur", a humorous parody character of Jacques Chirac from the '90s and 2000s.

SeBL4RD commented 1 year ago

So, were you able to do what you wanted with t2i_style?

pkuliyi2015 commented 1 year ago

I have no time to continue right now. I will get to it in my spare time.

SeBL4RD commented 1 year ago

Ok :)

SeBL4RD commented 1 year ago

If you have Discord, it may be easier for me to report things to you directly: SeBL4RD#6090

SeBL4RD commented 1 year ago

I did a rollback to b1bc3339c1c6c5447f53d34c0413417215ca4d81; it was broken again with OpenPose.
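
A rollback like that amounts to checking out the given commit inside the extension's folder. A minimal sketch (the path is assumed relative to the webui root; running plain git in a shell works just as well):

```python
# Minimal sketch: pin the extension to the known-good commit mentioned above.
# The folder path is assumed relative to the stable-diffusion-webui root.
import subprocess

subprocess.run(
    ["git", "checkout", "b1bc3339c1c6c5447f53d34c0413417215ca4d81"],
    cwd="extensions/multidiffusion-upscaler-for-automatic1111",
    check=True,  # raise if the checkout fails
)
```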