alpertunga-bile / prompt-generator-comfyui

Custom AI prompt generator node for the ComfyUI
MIT License

[BUG] !!! Exception during processing !!! #18

Open Myka88 opened 1 day ago

Myka88 commented 1 day ago

Despite following all the steps given in the repository and updating the dependencies, I still get errors.

Actual Behavior: AssertionError Prompt Generator C:\Users\K\AppData\Local\Programs\Python\Python310\lib\distutils\core.py

Steps to Reproduce: Run the workflow

Debug Logs

```
2024-09-20 00:35:41,025 - root - INFO - got prompt
2024-09-20 00:35:41,058 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:41,059 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:41,354 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-09-20 00:35:41,355 - root - INFO - model_type EPS
2024-09-20 00:35:42,169 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:42,171 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:42,992 - root - INFO - Requested to load SDXLClipModel
2024-09-20 00:35:42,992 - root - INFO - Loading 1 new model
2024-09-20 00:35:43,286 - root - INFO - loaded completely 0.0 1560.802734375 True
2024-09-20 00:35:43,553 - root - ERROR - !!! Exception during processing !!! C:\Users\K\AppData\Local\Programs\Python\Python310\lib\distutils\core.py
2024-09-20 00:35:43,558 - root - ERROR - Traceback (most recent call last):
  File "E:\ComfyUI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "E:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 279, in generate
    generator = Generator(model_path, is_accelerate, quantize)
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 47, in __init__
    self.model, self.tokenizer = get_model_tokenizer(
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 149, in get_model_tokenizer
    model = get_model(model_path, quant_type)
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 117, in get_model
    model = get_model_from_base(model_name, req_torch_dtype, type)
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 76, in get_model_from_base
    from optimum.quanto import qfloat8, qint8, qint4, quantize, freeze
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\__init__.py", line 18, in <module>
    from .library import *
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\__init__.py", line 15, in <module>
    from .extensions import *
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\extensions\__init__.py", line 17, in <module>
    from .cpp import *
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\extensions\cpp\__init__.py", line 19, in <module>
    from ..extension import Extension
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\extensions\extension.py", line 7, in <module>
    from torch.utils.cpp_extension import load
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py", line 10, in <module>
    import setuptools
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\__init__.py", line 8, in <module>
    import _distutils_hack.override  # noqa: F401
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\_distutils_hack\override.py", line 1, in <module>
    __import__('_distutils_hack').do_override()
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\_distutils_hack\__init__.py", line 77, in do_override
    ensure_local_distutils()
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\_distutils_hack\__init__.py", line 64, in ensure_local_distutils
    assert '_distutils' in core.__file__, core.__file__
AssertionError: C:\Users\K\AppData\Local\Programs\Python\Python310\lib\distutils\core.py
```


alpertunga-bile commented 13 hours ago

Hello @Myka88, thanks for reporting. There seems to be a problem with the optimum-quanto package. I am using version 0.2.4; you can check yours with the command pip show optimum-quanto.
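
If it is easier, you can also check both versions from inside the interpreter that ComfyUI uses; here is a minimal sketch using the standard-library importlib.metadata (Python 3.8+):

```python
# Minimal sketch: print the installed versions of the two packages
# discussed above. Run with the same Python interpreter that ComfyUI uses.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("optimum-quanto", "torch"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```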

In previous versions of this package, the required torch version was >= 2.2.0, and the installation script checks against that version. It seems they have since raised the requirement to >= 2.4.0, but the installation script was not updated accordingly, which may be causing the problem.
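
For illustration, the kind of version gate described above would look roughly like this. This is a hypothetical sketch, not the repository's actual installation script, and it assumes the packaging package is installed:

```python
# Hypothetical sketch of the torch version gate described above;
# the repository's actual installation script may differ.
from importlib.metadata import version
from packaging.version import Version

REQUIRED_TORCH = Version("2.4.0")  # new floor; the old check used >= 2.2.0

installed = Version(version("torch"))  # e.g. "2.4.0+cu124" parses fine
if installed < REQUIRED_TORCH:
    raise RuntimeError(
        f"optimum-quanto requires torch >= {REQUIRED_TORCH}, found {installed}"
    )
```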

I may have missed this issue since my torch package is currently at version 2.4.0+cu124. Could you check the versions of your optimum-quanto and torch packages and update them both to the latest versions? If the problem persists after the updates, please let me know.

Edit: I upgraded the packages used by the repository to the latest versions and checked with the workflow. The node seems to be working.