yolain / ComfyUI-Easy-Use

To make ComfyUI easier to use, I have optimized and integrated some commonly used nodes.
GNU General Public License v3.0

[ERROR] ModuleNotFoundError: No module named 'comfy.text_encoders' and ModuleNotFoundError: No module named 'comfy.sd3_clip' #396

Open rodrigoaustincascao opened 2 months ago

rodrigoaustincascao commented 2 months ago

Hi!

I'm installing the ComfyUI-Easy-Use package on a ComfyUI instance running in Docker, and I'm getting the following errors:

ModuleNotFoundError: No module named 'comfy.text_encoders'
ModuleNotFoundError: No module named 'comfy.sd3_clip'

How can I resolve it?

Below is the complete log:

```
Attaching to ollama, comfy-1
ollama | 2024/09/21 18:53:24 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost: https://localhost: http://127.0.0.1 https://127.0.0.1 http://127.0.0.1: https://127.0.0.1: http://0.0.0.0 https://0.0.0.0 http://0.0.0.0: https://0.0.0.0: app:// file:// tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama | time=2024-09-21T18:53:24.383Z level=INFO source=images.go:753 msg="total blobs: 0"
ollama | time=2024-09-21T18:53:24.383Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
ollama | time=2024-09-21T18:53:24.383Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.11)"
ollama | time=2024-09-21T18:53:24.383Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
ollama | time=2024-09-21T18:53:24.384Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
comfy-1 | Mounted .cache
comfy-1 | Mounted comfy
comfy-1 | Mounted input
ollama | time=2024-09-21T18:53:24.495Z level=INFO source=types.go:107 msg="inference compute" id=GPU-c14afc4c-c1e3-7c19-1c3a-9e58436cc849 library=cuda variant=v12 compute=6.1 driver=12.4 name="NVIDIA GeForce GTX 1050" total="2.9 GiB" available="2.8 GiB"
comfy-1 | Total VRAM 3012 MB, total RAM 31923 MB
comfy-1 | pytorch version: 2.3.0
comfy-1 | Set vram state to: NORMAL_VRAM
comfy-1 | Device: cuda:0 NVIDIA GeForce GTX 1050 : cudaMallocAsync
comfy-1 | VAE dtype: torch.float32
comfy-1 | Using pytorch cross attention
comfy-1 | ** User settings have been changed to be stored on the server instead of browser storage. **
comfy-1 | ** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. **
comfy-1 | Adding extra search path checkpoints /data/models/Stable-diffusion
comfy-1 | Adding extra search path configs /data/models/Stable-diffusion
comfy-1 | Adding extra search path vae /data/models/VAE
comfy-1 | Adding extra search path loras /data/models/Lora
comfy-1 | Adding extra search path upscale_models /data/models/RealESRGAN
comfy-1 | Adding extra search path upscale_models /data/models/ESRGAN
comfy-1 | Adding extra search path upscale_models /data/models/SwinIR
comfy-1 | Adding extra search path upscale_models /data/models/GFPGAN
comfy-1 | Adding extra search path hypernetworks /data/models/hypernetworks
comfy-1 | Adding extra search path controlnet /data/models/ControlNet
comfy-1 | Adding extra search path gligen /data/models/GLIGEN
comfy-1 | Adding extra search path clip /data/models/CLIPEncoder
comfy-1 | Adding extra search path embeddings /data/embeddings
comfy-1 | Adding extra search path custom_nodes /data/config/comfy/custom_nodes
comfy-1 | ### Loading: ComfyUI-Manager (V2.51)
comfy-1 | [ComfyUI-Manager] Some features are restricted due to your ComfyUI being outdated.
comfy-1 | ### ComfyUI Revision: 2197 [276f8fce] | Released on '2024-05-20'
comfy-1 | (pysssss:WD14Tagger) [DEBUG] Available ORT providers: AzureExecutionProvider, CPUExecutionProvider
comfy-1 | (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
comfy-1 | [Crystools INFO] Crystools version: 1.17.0
comfy-1 | [Crystools INFO] CPU: Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz - Arch: x86_64 - OS: Linux 6.8.0-45-generic
comfy-1 | [Crystools INFO] Pynvml (Nvidia) initialized.
comfy-1 | [Crystools INFO] GPU/s:
comfy-1 | [Crystools INFO] 0) NVIDIA GeForce GTX 1050
comfy-1 | [Crystools INFO] NVIDIA Driver: 550.107.02
comfy-1 | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
comfy-1 | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
comfy-1 | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
comfy-1 | Traceback (most recent call last):
comfy-1 |   File "/data/config/comfy/custom_nodes/ComfyUI-Easy-Use/py/libs/adv_encode.py", line 9, in <module>
comfy-1 |     from comfy.text_encoders.sd3_clip import SD3ClipModel, T5XXLModel
comfy-1 | ModuleNotFoundError: No module named 'comfy.text_encoders'
comfy-1 | 
comfy-1 | During handling of the above exception, another exception occurred:
comfy-1 | 
comfy-1 | Traceback (most recent call last):
comfy-1 |   File "/stable-diffusion/nodes.py", line 1879, in load_custom_node
comfy-1 |     module_spec.loader.exec_module(module)
comfy-1 |   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
comfy-1 |   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
comfy-1 |   File "/data/config/comfy/custom_nodes/ComfyUI-Easy-Use/__init__.py", line 21, in <module>
comfy-1 |     imported_module = importlib.import_module(".py.{}".format(module_name), name)
comfy-1 |   File "/opt/conda/lib/python3.10/importlib/__init__.py", line 126, in import_module
comfy-1 |     return _bootstrap._gcd_import(name[level:], package, level)
comfy-1 |   File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
comfy-1 |   File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
comfy-1 |   File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
comfy-1 |   File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
comfy-1 |   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
comfy-1 |   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
comfy-1 |   File "/data/config/comfy/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 22, in <module>
comfy-1 |     from .libs.adv_encode import advanced_encode
comfy-1 |   File "/data/config/comfy/custom_nodes/ComfyUI-Easy-Use/py/libs/adv_encode.py", line 11, in <module>
comfy-1 |     from comfy.sd3_clip import SD3ClipModel, T5XXLModel
comfy-1 | ModuleNotFoundError: No module named 'comfy.sd3_clip'
comfy-1 | 
comfy-1 | Cannot import /data/config/comfy/custom_nodes/ComfyUI-Easy-Use module for custom nodes: No module named 'comfy.sd3_clip'
comfy-1 | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
comfy-1 | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
comfy-1 | 
comfy-1 | Import times for custom nodes:
comfy-1 |    0.0 seconds: /stable-diffusion/custom_nodes/websocket_image_save.py
comfy-1 |    0.0 seconds: /data/config/comfy/custom_nodes/ComfyUI-WD14-Tagger
comfy-1 |    0.0 seconds: /data/config/comfy/custom_nodes/ComfyUI-Custom-Scripts
comfy-1 |    0.0 seconds: /data/config/comfy/custom_nodes/ComfyUI-Crystools
comfy-1 |    0.1 seconds: /data/config/comfy/custom_nodes/ComfyUI-Manager
comfy-1 |    0.2 seconds (IMPORT FAILED): /data/config/comfy/custom_nodes/ComfyUI-Easy-Use
comfy-1 |    0.2 seconds: /data/config/comfy/custom_nodes/comfyui-ollama
comfy-1 | 
comfy-1 | Starting server
comfy-1 | 
comfy-1 | To see the GUI go to: http://0.0.0.0:7860
```
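Reading the traceback: `adv_encode.py` first tries `from comfy.text_encoders.sd3_clip import SD3ClipModel, T5XXLModel` (line 9) and, when that fails, falls back to `from comfy.sd3_clip import ...` (line 11). On a ComfyUI checkout that predates SD3 support, neither module exists, so the second ImportError escapes and ComfyUI marks the whole node pack as IMPORT FAILED. The snippet below is only a minimal sketch of that guarded-import pattern, not the actual ComfyUI-Easy-Use source, and it resolves only inside a sufficiently new ComfyUI environment:

```python
# Sketch of the fallback import the traceback implies (the real code may differ).
# It prefers the current module layout (comfy/text_encoders/sd3_clip.py) and
# falls back to the older path (comfy/sd3_clip.py). On ComfyUI revision 2197
# (2024-05-20) neither module exists, so the second ImportError propagates and
# the node pack fails to load.
try:
    from comfy.text_encoders.sd3_clip import SD3ClipModel, T5XXLModel
except ImportError:
    from comfy.sd3_clip import SD3ClipModel, T5XXLModel
```

The except branch only papers over small differences between recent ComfyUI builds; it cannot help when the base install is old enough that both module paths are missing, which is what the log shows here.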

yolain commented 2 months ago

The version of ComfyUI is outdated.
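One quick way to confirm this from inside the container is to check whether the modules the node pack needs exist in the installed ComfyUI. This is a hypothetical diagnostic (not something either project ships), run from the ComfyUI root with its Python environment active:

```python
# Hypothetical check: see whether the modules ComfyUI-Easy-Use imports are
# available in the installed ComfyUI.
import importlib.util

for name in ("comfy.text_encoders.sd3_clip", "comfy.sd3_clip"):
    try:
        found = importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when the parent package (e.g. comfy.text_encoders) is missing.
        found = False
    print(name, "->", "found" if found else "missing")

# If both report "missing", the ComfyUI checkout predates SD3 support and
# needs to be updated (or the Docker image rebuilt against a newer ComfyUI).
```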

rodrigoaustincascao commented 2 months ago

My versions are:

Keywords: Long CLIP, FLUX Inpaint CNet, TorchCompileModel, v0.2.2

ComfyUI v0.2.2 Release
https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.2.2

- Mistoline Flux controlnet support
- pytorch 2.4.1+cu124 (portable)
- Make live preview size a configurable launch argument (--preview-size)

Feature/Update News:

- [Long CLIP] Long CLIP L support for SDXL, SD3 and Flux. (NOTE: Use the CLIPLoader)
- [FLUX Inpaint CNet] Support AliMama SD3 and Flux inpaint controlnets.
- [TorchCompileModel] A 'TorchCompileModel' node has been added that can improve performance. When the model or resolution is changed, a one-time, very long preparation time in KSampler is required.

Issue News:

ComfyUI: 2197 [276f8f] | Manager: V2.51

yolain commented 2 months ago

ComfyUI Revision: 2197 [276f8fce] | Released on '2024-05-20'

rodrigoaustincascao commented 2 months ago

I found out why it wasn't up to date. I'm using https://github.com/AbdBarho/stable-diffusion-webui-docker/tree/master, and its Dockerfile does a git reset to a pinned commit, so ComfyUI never moves past that revision. Commenting out that line and rebuilding fixed it.

Thank you for your attention.