0xbitches / ComfyUI-LCM

Latent Consistency Model for ComfyUI
GNU General Public License v3.0

img2img pipeline set_timesteps error #17

Closed m4a1carbin4 closed 8 months ago

m4a1carbin4 commented 8 months ago

with diffusers==0.22.2

lcm_img2img_pipeline.py, line 304

self.scheduler.set_timesteps(strength, num_inference_steps, lcm_origin_steps)

I got this error:
File "/home/waganawa/문서/NODE/fasteasysd/fasteasySD/venv/lib/python3.11/site-packages/diffusers/schedulers/scheduling_lcm.py", line 377, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
                ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: slice indices must be integers or None or have an __index__ method

I think this is because strength is a float.

But for img2img, it's hard to just tell people to use strength 1.

So is there another way to solve this?
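
For reference, here is a minimal, self-contained sketch (plain Python, not the real diffusers code) of how a float reaching that slicing logic produces exactly this TypeError. The variable names only mirror the traceback; the values and surrounding logic are assumptions for illustration:

    # Standalone illustration only; names mirror the traceback, not the
    # actual scheduler internals.
    num_train_timesteps = 1000
    lcm_origin_steps = 50
    num_inference_steps = 4
    strength = 0.5  # float coming from the img2img node

    c = num_train_timesteps // lcm_origin_steps
    lcm_origin_timesteps = [i * c - 1 for i in range(1, lcm_origin_steps + 1)]

    # If the float strength lands where an integer step count is expected,
    # the floor division produces a float ...
    skipping_step = len(lcm_origin_timesteps) // strength  # -> 100.0

    try:
        # ... and using a float as a slice step raises the reported error.
        timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
    except TypeError as err:
        print(err)  # slice indices must be integers or None or have an __index__ method

    # With an integer step count the same slicing works fine.
    skipping_step = len(lcm_origin_timesteps) // num_inference_steps  # -> 12
    print(lcm_origin_timesteps[::-skipping_step][:num_inference_steps])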

jojkaart commented 8 months ago

The code is written around a 50-step schedule, and when you're doing fewer steps it skips some of them. I suspect this error comes from trying to use settings that would result in more than 50 steps at a denoise strength of 1.0.

So, for example, if you're doing a denoise strength of 0.1, you can do a maximum of 50 * 0.1 = 5 steps.
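
Roughly, the budget works out like this (a toy sketch of the arithmetic described above, not the actual scheduler code):

    # Assumed relationship: usable schedule length = lcm_origin_steps * strength,
    # so that's the most inference steps you can request.
    def max_inference_steps(lcm_origin_steps: int, strength: float) -> int:
        return int(lcm_origin_steps * strength)

    for strength in (1.0, 0.5, 0.1):
        print(strength, "->", max_inference_steps(50, strength))
    # 1.0 -> 50, 0.5 -> 25, 0.1 -> 5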

jojkaart commented 8 months ago

Oh, also: the model has been trained with a 50-step schedule, so generating step indices for more than 50 steps might not work quite as well. Though it's probably worth testing how well it works with more.

jojkaart commented 8 months ago

I suspect simply passing a number greater than 50 for lcm_origin_steps would resolve the TypeError, but that uses the model in a way it wasn't trained for, so if it doesn't work correctly, that's why.

m4a1carbin4 commented 8 months ago


ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/waganawa/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/custom_nodes/ComfyUI-LCM/nodes.py", line 110, in sample
    self.pipe = LatentConsistencyModelImg2ImgPipeline.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 1105, in from_pretrained
    loaded_sub_model = load_sub_model(
                       ^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 391, in load_sub_model
    class_obj, class_candidates = get_class_obj_and_candidates(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 319, in get_class_obj_and_candidates
    class_obj = getattr(library, class_name)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 677, in __getattr__
    raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers has no attribute LCMScheduler
m4a1carbin4 commented 8 months ago

I suspect simply passing a number greater than 50 for lcm_origin_steps would resolve the TypeError, but that uses the model in a way it wasn't trained for, so if it doesn't work correctly, that's why.

I'm using 4 steps with prompt_strength 0.5. This error also occurred in the Python modules I built based on this extension, and if my understanding of the code is correct, the problem really does seem to be in the pipeline code of the simple img2img method.

m4a1carbin4 commented 8 months ago


ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/waganawa/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/custom_nodes/ComfyUI-LCM/nodes.py", line 110, in sample
    self.pipe = LatentConsistencyModelImg2ImgPipeline.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 1105, in from_pretrained
    loaded_sub_model = load_sub_model(
                       ^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 391, in load_sub_model
    class_obj, class_candidates = get_class_obj_and_candidates(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 319, in get_class_obj_and_candidates
    class_obj = getattr(library, class_name)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/waganawa/ComfyUI/venv/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 677, in __getattr__
    raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers has no attribute LCMScheduler

This error currently occurs when I use the simple img2img sampler in the ComfyUI environment. From my testing, other nodes have the same problem. It seems that diffusers recently introduced a problem around version 0.22.0.dev0, or they expect you to use the newly added LCM scheduler.
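
If it helps, here's a quick check of what the installed diffusers actually exposes (purely illustrative, not part of the extension):

    import diffusers

    print(diffusers.__version__)
    # The AttributeError above means this prints False, i.e. the installed
    # diffusers does not export LCMScheduler at the top level.
    print(hasattr(diffusers, "LCMScheduler"))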

jojkaart commented 8 months ago

I'm using 4 steps with prompt_strength 0.5. This error also occurred in the Python modules I built based on this extension, and if my understanding of the code is correct, the problem really does seem to be in the pipeline code of the simple img2img method.

In that case, it seems like a bug different from the one I've encountered myself. Can you add a debug print just above the line that raises the exception you reported the first time? Something like:

print("DEBUG: {} {} {} {}".format(lcm_origin_steps, len(lcm_origin_timesteps),skipping_step,num_inference_steps))

And then report here what it prints?
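
To show roughly where that print goes relative to the failing slice, here is a self-contained paraphrase of that part of set_timesteps (the names follow the traceback; this is a sketch, not the actual diffusers source):

    def set_timesteps_sketch(num_inference_steps, lcm_origin_steps, num_train_timesteps=1000):
        c = num_train_timesteps // lcm_origin_steps
        lcm_origin_timesteps = [i * c - 1 for i in range(1, lcm_origin_steps + 1)]
        skipping_step = len(lcm_origin_timesteps) // num_inference_steps
        # The debug print goes here, directly above the slice that raises.
        print("DEBUG: {} {} {} {}".format(
            lcm_origin_steps, len(lcm_origin_timesteps), skipping_step, num_inference_steps))
        return lcm_origin_timesteps[::-skipping_step][:num_inference_steps]

    print(set_timesteps_sketch(4, 50))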

The other errors you've listed don't look like they have anything to do with the first one. They look more like you're trying to use LCM with a version of diffusers that's too old to support it.

m4a1carbin4 commented 8 months ago

I'm using 4 steps with prompt_strength 0.5. This error also occurred in the Python modules I built based on this extension, and if my understanding of the code is correct, the problem really does seem to be in the pipeline code of the simple img2img method.

In that case, it seems like a bug different from the one I've encountered myself. Can you add a debug print just above the line that raises the exception you reported the first time? Something like:

print("DEBUG: {} {} {} {}".format(lcm_origin_steps, len(lcm_origin_timesteps),skipping_step,num_inference_steps))

And then report here what it prints?

The other errors you've listed don't look like they have anything to do with the first one. They look more like you're trying to use LCM with a version of diffusers that's too old to support it.

Well, the problem is that my diffusers is v0.22.1, and I've seen this LCM extension work successfully on that version. And when I updated diffusers to 0.22.2, it also didn't work.

And most importantly, this extension doesn't work right now.

m4a1carbin4 commented 8 months ago

I'm using 4 steps with prompt_strength 0.5. This error also occurred in the Python modules I built based on this extension, and if my understanding of the code is correct, the problem really does seem to be in the pipeline code of the simple img2img method.

In that case, it seems like a bug different from the one I've encountered myself. Can you add a debug print just above the line that raises the exception you reported the first time? Something like:

print("DEBUG: {} {} {} {}".format(lcm_origin_steps, len(lcm_origin_timesteps),skipping_step,num_inference_steps))

And then report here what it prints?

The other errors you've listed don't look like they have anything to do with the first one. They look more like you're trying to use LCM with a version of diffusers that's too old to support it.

And here is the requested debugging output:

 print("DEBUG: {} {} {} {}".format(lcm_origin_steps, len(lcm_origin_timesteps),skipping_step,num_inference_steps))
                                                            ^^^^^^^^^^^^^^^^^^^^
NameError: name 'lcm_origin_timesteps' is not defined. Did you mean: 'lcm_origin_steps'?

it's broken.

m4a1carbin4 commented 8 months ago

https://github.com/0xbitches/ComfyUI-LCM/issues/15#issuecomment-1798484992 After checking through this previous issue, it does seem the problem is caused by the change being officially reflected on the diffusers side. For now, you have to force an older version to use this extension.

jojkaart commented 8 months ago

And here is the requested debugging output:

 print("DEBUG: {} {} {} {}".format(lcm_origin_steps, len(lcm_origin_timesteps),skipping_step,num_inference_steps))
                                                            ^^^^^^^^^^^^^^^^^^^^
NameError: name 'lcm_origin_timesteps' is not defined. Did you mean: 'lcm_origin_steps'?

it's broken.

Well, the NameError is about a variable that is used on the very line your first exception printed out, so it can't be undefined if you placed the debug print where I asked you to.

If you're able to do simple debugging like this, please do so and provide the data. Otherwise, I'm afraid this is as much as I can assist you with this problem.

Tavius02 commented 8 months ago
Error occurred when executing LCM_img2img_Sampler_Advanced:

slice indices must be integers or None or have an __index__ method

  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LCM\nodes.py", line 260, in sample
    result = self.pipe(
             ^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LCM\lcm\lcm_i2i_pipeline.py", line 304, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, lcm_origin_steps)
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 377, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
                ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^

I've been having trouble with img2img today too; I think I have the same error as the first comment? The initial comment's error looks the same as the end of this one. I'm afraid I don't know much about Python/coding, so if there's any other information that would be useful I'm happy to try to provide it - I've previously used img2img without a problem. The ComfyUI workflow that's generating the error is just the default setup from this page for LCM img2img advanced.

The text workflows are working fine.

m4a1carbin4 commented 8 months ago
Error occurred when executing LCM_img2img_Sampler_Advanced:

slice indices must be integers or None or have an __index__ method

  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LCM\nodes.py", line 260, in sample
    result = self.pipe(
             ^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LCM\lcm\lcm_i2i_pipeline.py", line 304, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, lcm_origin_steps)
  File "C:\Users\mjsin\Downloads\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 377, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
                ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^

I've been having trouble with img2img today too; I think I have the same error as the first comment? The initial comment's error looks the same as the end of this one. I'm afraid I don't know much about Python/coding, so if there's any other information that would be useful I'm happy to try to provide it - I've previously used img2img without a problem. The ComfyUI workflow that's generating the error is just the default setup from this page for LCM img2img advanced.

The text workflows are working fine.

Check this comment: https://github.com/0xbitches/ComfyUI-LCM/issues/15#issuecomment-1798484992 It seems that the LCM scheduler, which was recently registered officially as part of the change on the diffusers side, is not linked to this extension's pipeline code. If you force it to load the right revision, it works.

Tavius02 commented 8 months ago

Thanks - I've tried to follow the advice, but I think I must have done it incorrectly. I added the code to nodes.py at line 112 in ComfyUI-LCM and restarted ComfyUI, but it doesn't seem to have done the trick; the error stays the same.

I assume there's some extra step I've missed to make it load the right thing? If you have any idea what else I need to do, I'd really appreciate it.

m4a1carbin4 commented 8 months ago

Thanks - I've tried to follow the advice, but I think I must have done it incorrectly. I added the code to nodes.py at line 112 in ComfyUI-LCM and restarted ComfyUI, but it doesn't seem to have done the trick; the error stays the same.

I assume there's some extra step I've missed to make it load the right thing? If you have any idea what else I need to do, I'd really appreciate it.

Oh, sorry for the late reply.

I have studied this error a little more. There is also a problem with the LCM scheduler that diffusers recently added, rather than just with the pipeline made by this repo.

At the top of the file, you can change "from .lcm.lcm_scheduler import LCMScheduler" to something like "from .lcm.lcm_scheduler import LCMScheduler as LCMScheduler_local".

And in the middle, change "self.scheduler = LCMScheduler~~" to "self.scheduler = LCMScheduler_local~~". Try modifying it like that.

Tavius02 commented 8 months ago

Can't seem to get it working unfortunately - thanks anyway though, I appreciate you taking the time to help. Fortunately, using the new LCM LoRAs seems to be working well as a substitute for img2img, so the bug's come along at the perfect time!

m4a1carbin4 commented 8 months ago

Oh, never mind - I'm also busy testing the LCM LoRAs. Thank you for your reply anyway!