Panchovix / stable-diffusion-webui-reForge


[Bug]: ControlNet does nothing #55

Closed: Yumeo0 closed this issue 3 months ago

Yumeo0 commented 3 months ago


What happened?

I don't know what happened, but ControlNet doesn't seem to do anything anymore. Even at full strength, the ControlNet model doesn't influence the pose at all. (Screenshot 2024-07-22 214844)

Steps to reproduce the problem

  1. Select ControlNet model of choice
  2. Try to generate an image

What should have happened?

ControlNet influences the outcome of the image.

What browsers do you use to access the UI?

Brave

Sysinfo

sysinfo-2024-07-22-20-15.json

Console logs

https://pastebin.com/Bt69MsjT

Additional information

I updated the WebUI today, so maybe a change broke it?

Panchovix commented 3 months ago

I made some updates on dev for compatibility with newer ControlNet, and they may have caused that issue.

If you try an older commit with git checkout e1e6865, does it work as expected? If yes, I'll revert the changes.

You can return to the latest commit with git checkout dev_upstream.

The changes are pretty minor, so I'm not sure whether they would affect this.

Panchovix commented 3 months ago

Wait, checking the log, it said this:

*** Error running process: C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\AI\stable-diffusion-webui-reForge\modules\scripts.py", line 845, in process
        script.process(p, *script_args)
      File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
      File "C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 573, in process
        self.process_unit_after_click_generate(p, unit, params, *args, **kwargs)
      File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
      File "C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 419, in process_unit_after_click_generate
        assert unit.model != 'None', 'You have not selected any control model!'
               ^^^^^^^^^^^^^^^^^^^^
    AssertionError: You have not selected any control model!

Does OpenPose come from another extension, without the need for a model? (Sorry, I'm really a novice with ControlNet.)

Yumeo0 commented 3 months ago

image

Panchovix commented 3 months ago

Sorry for the novice question, but does OpenPose need a model, or does it just work like that?

Yumeo0 commented 3 months ago

Are you talking about this? image

Panchovix commented 3 months ago

Yes! I haven't used OpenPose before, so I'm not sure how it works.

Yumeo0 commented 3 months ago

This isn't specific to OpenPose; it's about ControlNet in general. If you input an image, you can preprocess it using a module such as OpenPose. I already provided a pose as an image, so I don't have to preprocess it anymore; that's why I leave the preprocessor as "None".

Panchovix commented 3 months ago

Oh, I see. If you select openpose as the preprocessor and None as the model, do you get the same issue? Also, did this work before today's commits?

Yumeo0 commented 3 months ago

It did work before I updated today. At least it did when I last used it, I think two days ago.

If I put OpenPose as the preprocessor and "None" as the model, I get a blank ControlNet image. (It still generates the output image, but again without being influenced.) image

Yumeo0 commented 3 months ago

I will try the older commit you wanted me to check out.

Yumeo0 commented 3 months ago

I get the same error:

*** Error running postprocess_batch_list: C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\AI\stable-diffusion-webui-reForge\modules\scripts.py", line 909, in postprocess_batch_list
        script.postprocess_batch_list(p, pp, *script_args, **kwargs)
      File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
      File "C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 586, in postprocess_batch_list
        self.process_unit_after_every_sampling(p, unit, self.current_params[i], pp, *args, **kwargs)
                                                        ~~~~~~~~~~~~~~~~~~~^^^
    KeyError: 0

Panchovix commented 3 months ago

Okay, can you go back to the commit that worked for you yesterday? Go to https://github.com/Panchovix/stable-diffusion-webui-reForge/commits/dev_upstream, find that commit, and open it; below "Browse files" on the right there is a commit hash.

image

Run git checkout hash, replacing hash with the value shown there, and let me know if it works as expected. Also, please let me know which commit/hash was the working one.

Yumeo0 commented 3 months ago

I am utterly confused. I can't get it to work at all anymore. Everything I try with the preprocessor on throws this error: https://pastebin.com/ZyzwLuFA

It doesn't even generate an image with the preprocessor turned on...

Panchovix commented 3 months ago

That is pretty weird if it worked before. Can you try the main branch and see if it works there? If not, the remaining test would be stock Forge (i.e., commit bfee03d, via git checkout bfee03d).

Yumeo0 commented 3 months ago

I tried it on my separate Forge install and I get the same error there too?

Panchovix commented 3 months ago

Hmm, if it happens on stock Forge, then it would probably happen here as well. Pretty weird given what you describe, though. If you have a separate A1111 install, does it work there? (A1111 uses the updated ControlNet extension, so it differs, but it's worth testing.)

Yumeo0 commented 3 months ago

Alright, I don't know what I did, but I am back where I started with reForge: the same error I first had. On A1111 it works fine.

Panchovix commented 3 months ago

Okay, thanks. I think it isn't a bug from the recent updates but a bug in how Forge handles OpenPose, if I'm not mistaken.

Yumeo0 commented 3 months ago

It's not just OpenPose, though. Canny, Depth, and all the other ControlNet options are broken like that too. It just won't work at all. image

Moving model(s) has taken 1.67 seconds
  0%|                                                                                           | 0/24 [00:00<?, ?it/s]start LLLite: step 0
  0%|                                                                                           | 0/24 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "C:\AI\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\processing.py", line 815, in process_images
    res = process_images_inner(p)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\processing.py", line 1362, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 261, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\sd_samplers_common.py", line 274, in launch_sampling
    return func()
           ^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 261, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\modules\sd_samplers_cfg_denoiser.py", line 369, in forward
    denoised = sampling_function(model, x, sigma, uncond_patched, cond_patched, cond_scale, model_options, seed)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 285, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 234, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\modules\model_base.py", line 118, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 859, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 51, in forward_timestep_embed
    x = layer(x, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\attention.py", line 725, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\attention.py", line 591, in forward
    n, context_attn1, value_attn1 = p(n, context_attn1, value_attn1, extra_options)
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlllite\lib_controllllite\lib_controllllite.py", line 102, in __call__
    q = q + self.modules[module_pfx_to_q](q)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\stable-diffusion-webui-reForge\extensions-builtin\sd_forge_controlllite\lib_controllllite\lib_controllllite.py", line 234, in forward
    cx = torch.cat([cx, self.down(x)], dim=1 if self.is_conv2d else 2)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 384 but got size 400 for tensor number 1 in the list.
Sizes of tensors must match except in dimension 2. Expected size 384 but got size 400 for tensor number 1 in the list.
*** Error completing request
*** Arguments: ('task(ssxqke6pha5hey2)', <gradio.routes.Request object at 0x000001FFED650A50>, 'score_9,score_8_up,score_7_up,source_anime,<lora:D_Jirooo_Artist_Style_PonyXL:0.8>,1girl,<lora:ganyu-ponyxl-lora-nochekaiser:0.75>,ganyu,blue hair,goat horns,horns,long hair,purple eyes,sidelocks,gloves,bare shoulders,detached sleeves,black gloves,bell,bodysuit,neck bell,black bodysuit,vision \\(genshin impact\\),bodystocking,cowbell,chinese knot,full body,', 'text,speech bubble,monochrome,greyscale,', [], 1, 1, 7, 784, 512, False, 0.7, 2, 'R-ESRGAN 4x+ Anime6B', 24, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 24, 'Euler a', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=True, module='canny', model='kohya_controllllite_xl_canny_anime [7158f7e0]', weight=1, image={'image': array([[[204,  99, 106],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=512, threshold_a=100, threshold_b=200, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode=<ControlMode.CONTROL: 'ControlNet is more important'>, advanced_weighting=None, save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, advanced_weighting=None, save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, advanced_weighting=None, save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\AI\stable-diffusion-webui-reForge\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    TypeError: 'NoneType' object is not iterable
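
For readers skimming the trace: the failure is the torch.cat in lib_controllllite.py, which receives two tensors whose token dimensions disagree, 384 vs 400. Below is a minimal standalone sketch that reproduces the same RuntimeError; the 384/400 shapes come from the log above, everything else is illustrative.

    import torch

    # Illustrative shapes only: [batch, tokens, channels].
    cx = torch.randn(1, 384, 128)  # conditioning-path features (384 tokens)
    dx = torch.randn(1, 400, 128)  # attention-path features (400 tokens)
    # Concatenating along dim=2 requires dims 0 and 1 to match, so this raises:
    # RuntimeError: Sizes of tensors must match except in dimension 2.
    # Expected size 384 but got size 400 for tensor number 1 in the list.
    torch.cat([cx, dx], dim=2)
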
Yumeo0 commented 3 months ago

I also found out that the preprocessor and the model are separate. You are supposed to always select a model (makes sense, I was just being careless), but you can leave the preprocessor as "None". It doesn't work in reForge/Forge either way, though.
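
As a conceptual sketch of that split (the function and names below are illustrative pseudocode, not the extension's actual API): the preprocessor only turns the input image into a control map, while the model is what injects that map into the UNet, which is why a model is always required but the preprocessor is optional.

    # Conceptual sketch only; names are hypothetical, not the extension's API.
    def run_controlnet_unit(input_image, preprocessor, control_model, weight=1.0):
        # Preprocessor "None" means the input image is already a control map
        # (e.g., a pre-rendered OpenPose skeleton or a canny edge map).
        control_map = input_image if preprocessor is None else preprocessor(input_image)
        # The control model is what actually steers the UNet; without one there
        # is nothing to apply, hence the assertion seen in the first traceback.
        assert control_model is not None, "You have not selected any control model!"
        return control_model.apply(control_map, strength=weight)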

Panchovix commented 3 months ago

I just downloaded and tested a model, and it seems to work. Is it set up like this?

image

2024-07-22 17:38:54,965 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE25/25 [00:08<00:00,  5.84it/s]
2024-07-22 17:38:54,998 - ControlNet - INFO - Using preprocessor: canny
2024-07-22 17:38:54,999 - ControlNet - INFO - preprocessor resolution = 512
2024-07-22 17:38:55,628 - ControlNet - INFO - Current ControlNet ControlNetPatcher: G:\Stable difussion\stable-diffusion-webui-reForge\models\ControlNet\canny_sdxl.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
2024-07-22 17:38:56,394 - ControlNet - INFO - ControlNet Method canny patched.
2024-07-22 17:38:56,398 - ControlNet - INFO - ControlNet Method canny patched.
To load target model SDXL
To load target model ControlNet
Begin to load 2 models
Moving model(s) has taken 0.05 seconds
100%|██████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.90it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.86it/s]
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
postprocess_batch

0: 640x544 1 face, 8.8ms
Speed: 1.5ms preprocess, 8.8ms inference, 1.5ms postprocess per image at shape (1, 3, 640, 544)
WARNING:root:Sampler Scheduler autocorrection: "DPM++ 2M" -> "DPM++ 2M", "None" -> "Automatic"
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
100%|██████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00,  8.36it/s]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
2024-07-22 17:39:04,501 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-07-22 17:39:04,533 - ControlNet - INFO - Using preprocessor: canny
2024-07-22 17:39:04,533 - ControlNet - INFO - preprocessor resolution = 512
2024-07-22 17:39:05,184 - ControlNet - INFO - Current ControlNet ControlNetPatcher: G:\Stable difussion\stable-diffusion-webui-reForge\models\ControlNet\canny_sdxl.safetensors
Total progress: 100%|██████████████████████████████████████████████████████████████| 25/25 [00:09<00:00,  2.75it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████| 25/25 [00:09<00:00,  5.86it/s]
Yumeo0 commented 3 months ago

Gimme a minute. I have a mission now:

  1. Wipe all instances of each WebUI off of my PC.
  2. Delete all versions of Python.
  3. Install Python 3.10.6.
  4. And last but not least, install reForge and try again...

Panchovix commented 3 months ago

I downloaded an openpose model and then it worked, even without setting a preprocessor, but I had to choose a model no matter what; otherwise I would get the no-model error (I guess that's Forge behavior).

image

2024-07-22 17:52:08,062 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE25/25 [00:08<00:00,  5.90it/s]
2024-07-22 17:52:08,063 - ControlNet - INFO - Using preprocessor: None
2024-07-22 17:52:08,063 - ControlNet - INFO - preprocessor resolution = 512
2024-07-22 17:52:08,099 - ControlNet - INFO - Current ControlNet ControlNetPatcher: G:\Stable difussion\stable-diffusion-webui-reForge\models\ControlNet\OpenPoseXL2.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
2024-07-22 17:52:09,063 - ControlNet - INFO - ControlNet Method None patched.
2024-07-22 17:52:09,064 - ControlNet - INFO - ControlNet Method None patched.
To load target model SDXL
To load target model ControlNet
Begin to load 2 models
Moving model(s) has taken 0.06 seconds
100%|██████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.68it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.84it/s]
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
postprocess_batch

0: 544x640 1 face, 32.9ms
Speed: 2.5ms preprocess, 32.9ms inference, 1.0ms postprocess per image at shape (1, 3, 544, 640)
WARNING:root:Sampler Scheduler autocorrection: "DPM++ 2M" -> "DPM++ 2M", "None" -> "Automatic"
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.03 seconds
100%|██████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00,  5.91it/s]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
2024-07-22 17:52:17,966 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-07-22 17:52:17,967 - ControlNet - INFO - Using preprocessor: None
2024-07-22 17:52:17,967 - ControlNet - INFO - preprocessor resolution = 512
2024-07-22 17:52:17,996 - ControlNet - INFO - Current ControlNet ControlNetPatcher: G:\Stable difussion\stable-diffusion-webui-reForge\models\ControlNet\OpenPoseXL2.safetensors
Total progress: 100%|██████████████████████████████████████████████████████████████| 25/25 [00:09<00:00,  2.73it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████| 25/25 [00:09<00:00,  5.84it/s]
Yumeo0 commented 3 months ago

I reinstalled everything completely from scratch and I still get an error... I might just give up and learn ComfyUI at this point 😭

Yumeo0 commented 3 months ago

It was the model. The model did not work; using a different model fixed it. I'm gonna go bury myself now. So any kohya ControlNet-LLLite model throws an error.

Yumeo0 commented 3 months ago

Could it be happening because ControlNet-LLLite models are not implemented in the reForge ControlNet extension?

Panchovix commented 3 months ago

Oh haha, well, glad you found the culprit for this case.

I have to check the no-model-loaded thing; when I have enough time I will check how A1111 implements it and see how it works.

About the model: I think it supports normal ControlNet, but I guess it doesn't support ControlNet-LLLite; it's probably missing updates. I will try to check and understand how it works and update accordingly. I think I saw some Comfy nodes doing this.

Closing the issue for now.

Panchovix commented 3 months ago

Okay, so this issue happens with stock Forge as well? (Same error?)

Yumeo0 commented 3 months ago

I already turned off my PC because I'm going to sleep now, but I'm pretty sure it was the same error. You can probably reproduce it yourself if you use any kohya ControlNet-LLLite model from here: http://huggingface.co/lllyasviel/sd_control_collection/tree/main

Panchovix commented 3 months ago

Thanks, I'm gonna see if I can get something done today.

Panchovix commented 3 months ago

Okay, I found it, and it seems to be an issue with ControlNet-LLLite itself.

https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI/issues/8

Quoting a comment from that issue:

I'm working with only portrait images (height > width), and SOME of my input images give this same error and some do not. I haven't been able to figure out a pattern... but it's definitely not landscape-layout images for me. I am using a node that extracts the size of the input image, and then I'm using that same size for the latent, so I'm always trying to generate an image with the same dimensions as the one that is input into the "Load LLLite" node.

I made a list of the resolutions I've tested and whether they work, if it helps:

works:
512x768
704x960
768x960
768x1024
1024x1280
1280x1600

fails:
1080x1440
1200x1440

I tested with those resolutions and it works. A1111 probably has some fixes related to this. For Comfy you need an extra node from an extension.

image

2024-07-22 20:55:29,619 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE██| 25/25 [00:04<00:00,  5.74it/s]
2024-07-22 20:55:29,640 - ControlNet - INFO - Using preprocessor: canny
2024-07-22 20:55:29,640 - ControlNet - INFO - preprocessor resolution = 512
2024-07-22 20:55:29,929 - ControlNet - INFO - Current ControlNet ControlLLLitePatcher: G:\Stable difussion\stable-diffusion-webui-reForge\models\ControlNet\controllllite\controllllite_v01032064e_sdxl_canny_anime.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
136 modules
2024-07-22 20:55:30,915 - ControlNet - INFO - ControlNet Method canny patched.
136 modules
2024-07-22 20:55:31,060 - ControlNet - INFO - ControlNet Method canny patched.
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.12 seconds
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]start LLLite: step 0
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.74it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.84it/s]
Begin to load 1 model
Moving model(s) has taken 0.05 seconds
postprocess_batch
Total progress: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.21it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.84it/s]

I will research how to fix it when I have time, since it seems non-trivial, but at least we know it should work.

CHollman823 commented 2 months ago

Hi Panchovix, I wrote the comment you quoted with the list of working and non-working resolutions. You've probably figured this out by now, but just in case: I found the common denominator... literally. It's 64. All of the resolutions that work are multiples of 64, and the ones that fail are not. I've tested several other oddball resolutions, and they all work if they are multiples of 64 and fail otherwise. Hope this helps.
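
A quick sanity check of that observation, plus one plausible reason 64 matters: 64 px is the VAE's 8x downscale times the UNet's further 8x at its deepest attention level, so non-multiples of 64 leave a fractional size that one code path rounds up while another rounds down. The helpers below are an illustration of that suspected rounding clash, not code from the extension; note that 784x512, the size appearing in the failing run's arguments, yields exactly the 400-vs-384 token counts from the log.

    import math

    # Hypothetical helpers illustrating the suspected rounding mismatch.
    def attn_tokens(w, h, pixel_down=32):
        # UNet feature maps pad odd sizes UP when downsampling.
        return math.ceil(w / pixel_down) * math.ceil(h / pixel_down)

    def cond_tokens(w, h, pixel_down=32):
        # A conditioning path using floor division rounds DOWN instead.
        return (w // pixel_down) * (h // pixel_down)

    for w, h in [(512, 768), (1280, 1600), (1080, 1440), (1200, 1440), (784, 512)]:
        ok = (w % 64 == 0) and (h % 64 == 0)
        a, c = attn_tokens(w, h), cond_tokens(w, h)
        mark = "" if a == c else "  <-- mismatch"
        print(f"{w}x{h}: multiple of 64: {ok}, tokens {a} vs {c}{mark}")

For multiples of 64 every downsampled size stays an integer, so ceil and floor agree and the concatenation shapes line up; for the failing sizes they diverge, which matches the 64-multiple pattern reported above.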