Haoming02 / sd-forge-ic-light

An Extension for Forge Webui that implements IC-Light
Apache License 2.0

Support for reForge webui dev_upstream branch #5

Closed: Panchovix closed this issue 3 months ago

Panchovix commented 3 months ago

Hi there, thanks for all your hard work!

I forked the project and updated it to match A1111/Comfy upstream here: https://github.com/Panchovix/stable-diffusion-webui-reForge/tree/dev_upstream

Since I changed the code to upstream Comfy for ldm_patched\ldm\modules\diffusionmodules\openaimodel.py and the conv2d implementation in ldm_patched/modules/ops.py, there are now some issues.

The error is:

Downloading data from 'https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_human_seg.onnx' to file 'G:\Stable difussion\stable-diffusion-webui\models\u2net\u2net_human_seg.onnx'.
100%|###############################################| 176M/176M [00:00<?, ?B/s]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.03 seconds
To load target model AutoencoderKL
Begin to load 1 model
Reuse 1 loaded models
Moving model(s) has taken 0.00 seconds
To load target model BaseModel
Begin to load 1 model
WARNING:root:WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 12, 3, 3]) != torch.Size([320, 4, 3, 3])
Moving model(s) has taken 0.13 seconds
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\processing.py", line 815, in process_images
    res = process_images_inner(p)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\processing.py", line 1362, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 236, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_common.py", line 274, in launch_sampling
    return func()
           ^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 236, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_cfg_denoiser.py", line 369, in forward
    denoised = sampling_function(model, x, sigma, uncond_patched, cond_patched, cond_scale, model_options, seed)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 290, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 257, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\extensions\sd-forge-ic-light\libiclight\ic_light_nodes.py", line 61, in wrapper_func
    return existing_wrapper(unet_apply, params=apply_c_concat(params))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\extensions\sd-forge-ic-light\libiclight\ic_light_nodes.py", line 53, in unet_dummy_apply
    return unet_apply(x=params["input"], t=params["timestep"], **params["c"])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\model_base.py", line 118, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 859, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 57, in forward_timestep_embed
    x = layer(x)
        ^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\ops.py", line 137, in forward
    return super().forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\conv.py", line 549, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\conv.py", line 544, in _conv_forward
    return F.conv2d(
           ^^^^^^^^^
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 12, 136, 112] to have 4 channels, but got 12 channels instead
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 12, 136, 112] to have 4 channels, but got 12 channels instead

For the VAE, it does get picked up via def clone(self) at https://github.com/Panchovix/stable-diffusion-webui-reForge/blob/99cb084fbdac6e0084052b82bbe31140867b9213/ldm_patched/modules/sd.py#L293
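
For context, judging from the call sites in the traceback, wrapper_func and apply_c_concat concatenate the IC-Light conditioning latent onto the 4-channel noise latent along the channel dimension, which is where the 8 (FC) or 12 (FBC) input channels come from. A rough sketch of that flow (names and shapes are illustrative, not the extension's exact code):

import torch

# Illustrative only; names and shapes are assumptions based on the traceback.
latent = torch.randn(2, 4, 136, 112)      # noise latent from the sampler
c_concat = torch.randn(2, 8, 136, 112)    # IC-Light conditioning (fg + bg latents for the FBC model)

x = torch.cat([latent, c_concat], dim=1)  # -> torch.Size([2, 12, 136, 112])

# The UNet's first conv (diffusion_model.input_blocks.0.0) must therefore be
# patched from [320, 4, 3, 3] to [320, 12, 3, 3]. The "SHAPE MISMATCH ...
# WEIGHT NOT MERGED" warning shows that patch being rejected, so the
# 4-channel conv receives a 12-channel input and F.conv2d raises the
# RuntimeError above.
conv_in = torch.nn.Conv2d(4, 320, 3, padding=1)
# conv_in(x)  # would fail: expected input to have 4 channels, but got 12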

Haoming02 commented 3 months ago

Is it specifically the dev_upstream branch?

Panchovix commented 3 months ago

Yes, it should only affect the dev_upstream branch, since there the conv2d implementation in ldm_patched.modules.ops follows Comfy upstream, which differs from the original Forge main branch.

I added some new parameters to the VAE clone; otherwise it would fail due to missing parameters (which ldm_patched.modules.sd introduced with the updates).
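
The "WEIGHT NOT MERGED" warning in the log comes from the shape guard in the upstream weight-merging code. Paraphrased from memory of Comfy's calculate_weight() handling of "diff" patches, so treat this as a sketch rather than the exact source:

import logging
import torch

def apply_diff_patch(key, weight, w1, alpha=1.0):
    # Sketch of the "diff" branch: if the patch tensor's shape doesn't match
    # the target weight, the patch is dropped with the warning seen above,
    # and conv_in keeps its original 4 input channels.
    if w1.shape != weight.shape:
        logging.warning("WARNING SHAPE MISMATCH %s WEIGHT NOT MERGED %s != %s",
                        key, w1.shape, weight.shape)
        return weight
    return weight + alpha * w1.type(weight.dtype)

# e.g. apply_diff_patch('diffusion_model.input_blocks.0.0.weight',
#                       torch.zeros(320, 4, 3, 3), torch.zeros(320, 12, 3, 3))
# -> warning fires and the original weight is returned unchanged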

Panchovix commented 3 months ago

I updated conv2d again to match Comfy upstream. The error persists, but at least it is now just 8 channels instead of 12:

Traceback (most recent call last):
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\processing.py", line 820, in process_images
    res = process_images_inner(p)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\processing.py", line 969, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\processing.py", line 1340, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 261, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_common.py", line 274, in launch_sampling
    return func()
           ^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 261, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\modules\sd_samplers_cfg_denoiser.py", line 797, in forward
    denoised = sampling_function(model, x, sigma, uncond_patched, cond_patched, cond_scale, model_options, seed)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 280, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 259, in calc_cond_uncond_batch
    return tuple(calc_cond_batch(model, [cond, uncond], x_in, timestep, model_options))
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 232, in calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\extensions\sd-forge-ic-light\libiclight\ic_light_nodes.py", line 61, in wrapper_func
    return existing_wrapper(unet_apply, params=apply_c_concat(params))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\extensions\sd-forge-ic-light\libiclight\ic_light_nodes.py", line 53, in unet_dummy_apply
    return unet_apply(x=params["input"], t=params["timestep"], **params["c"])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\model_base.py", line 118, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 861, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 57, in forward_timestep_embed
    x = layer(x)
        ^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui-reForge\ldm_patched\modules\ops.py", line 145, in forward
    return super().forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\conv.py", line 549, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Stable difussion\stable-diffusion-webui\venv\Lib\site-packages\torch\nn\modules\conv.py", line 544, in _conv_forward
    return F.conv2d(
           ^^^^^^^^^
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 136, 112] to have 4 channels, but got 8 channels instead
dan4ik94 commented 3 months ago

What are those channels and weights, actually? IC-Light seems to encode/decode the image somehow; I'm not familiar with it.

Haoming02 commented 3 months ago

I went ahead and checked huchenlei's implementation for ComfyUI, and saw that people have the same issue over there too.

But apparently, according to this comment, installing LayerDiffuse fixes the issue.

So I ported the patching from that Extension over here as well, and now it seems to work properly without any errors.
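
As far as I can tell, the idea of that patch is to replace the hard rejection on shape mismatch with a zero-padded merge, so the 4-channel conv_in weight can absorb an 8- or 12-channel IC-Light offset. A hedged sketch of the approach (merge_with_pad is a made-up name, not the extension's actual function):

import torch

def merge_with_pad(weight: torch.Tensor, diff: torch.Tensor) -> torch.Tensor:
    # Zero-pad the original weight along the input-channel dim so the extra
    # conditioning channels start as no-ops, then add the patch offset.
    if diff.shape == weight.shape:
        return weight + diff
    padded = torch.zeros_like(diff)
    padded[:, : weight.shape[1]] = weight  # copy the original 4 input channels
    return padded + diff                   # [320, 4, 3, 3] -> [320, 8/12, 3, 3]

# merge_with_pad(torch.randn(320, 4, 3, 3), torch.randn(320, 12, 3, 3)).shape
# -> torch.Size([320, 12, 3, 3])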

Haoming02 commented 3 months ago

Though it feels like the effect is somehow stronger...?

pinea00 commented 3 months ago

Why do I get the same error on reForge (the code has been updated), while it works perfectly on Forge? I didn't install LayerDiffuse for either Forge or reForge.

WARNING:root:WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
Moving model(s) has taken 0.62 seconds
  0%|                                                                                           | 0/8 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "S:\reForge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "S:\reForge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "S:\reForge\modules\img2img.py", line 235, in img2img_function
    processed = process_images(p)
  File "S:\reForge\modules\processing.py", line 856, in process_images
    res = process_images_inner(p)
  File "S:\reForge\modules\processing.py", line 1007, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "S:\reForge\modules\processing.py", line 1828, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "S:\reForge\modules\sd_samplers_kdiffusion.py", line 207, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "S:\reForge\modules\sd_samplers_common.py", line 274, in launch_sampling
    return func()
  File "S:\reForge\modules\sd_samplers_kdiffusion.py", line 207, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "S:\venvzluda\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "S:\reForge\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "S:\reForge\modules\sd_samplers_cfg_denoiser.py", line 225, in forward
    denoised = sampling_function(model, x, sigma, uncond_patched, cond_patched, cond_scale, model_options, seed)
  File "S:\reForge\ldm_patched\modules\samplers.py", line 299, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "S:\reForge\ldm_patched\modules\samplers.py", line 260, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "S:\stable-diffusion-webui\extensions\sd-forge-ic-light\lib_iclight\ic_light_nodes.py", line 61, in wrapper_func
    return existing_wrapper(unet_apply, params=apply_c_concat(params))
  File "S:\stable-diffusion-webui\extensions\sd-forge-ic-light\lib_iclight\ic_light_nodes.py", line 53, in unet_dummy_apply
    return unet_apply(x=params["input"], t=params["timestep"], **params["c"])
  File "S:\reForge\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "S:\reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 886, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "S:\reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "S:\reForge\ldm_patched\modules\ops.py", line 114, in forward
    return super().forward(*args, **kwargs)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "S:\venvzluda\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead
Error completing request
Arguments: ('task(wfpegk5t343l3v7)', <gradio.routes.Request object at 0x000001BC852262F0>, ..., True, 'iclight_sd15_fc_unet_ldm', 'Right Light', 'Use Background Image', array([...], dtype=uint8), ...) {}
Traceback (most recent call last):
  File "S:\reForge\modules\call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable


Panchovix commented 3 months ago

Many thanks for the update, working fine on my end!