comfyanonymous / ComfyUI

The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
GNU General Public License v3.0

Missing Models and Custom Nodes in ComfyUI, including IP-Adapters (I would like to contribute and try to fix this) #3587

Open MiladZarour opened 1 month ago

MiladZarour commented 1 month ago

When using ComfyUI and running run_with_gpu.bat, importing a JSON file may result in missing nodes. This issue can be easily fixed by opening the manager and clicking on "Install Missing Nodes," allowing us to check and install the required nodes.

However, this functionality does not extend to missing models or IP-adapters.

To address this, I suggest implementing a feature that allows the installation of missing models and other components directly from the manager tab when importing a JSON file. This would streamline the process and ensure all necessary components are installed seamlessly.
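To give a rough picture of what I have in mind, here is a minimal sketch (not actual ComfyUI code; the node types, widget layout, and folder names are my assumptions) that scans an exported workflow JSON for model filenames and reports which ones are missing locally:

```python
import json
from pathlib import Path

# Hypothetical mapping from loader node types to the models/ subfolder their
# file should live in. Exported workflows store the chosen filename as the
# first entry of the node's widgets_values list for these loaders.
MODEL_NODE_FOLDERS = {
    "CheckpointLoaderSimple": "checkpoints",
    "LoraLoader": "loras",
    "VAELoader": "vae",
    "IPAdapterModelLoader": "ipadapter",  # from the IP-Adapter custom nodes
}

def find_missing_models(workflow_path: str, models_root: str = "models"):
    """Return (subfolder, filename) pairs referenced by the workflow but absent on disk."""
    with open(workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    missing = []
    for node in workflow.get("nodes", []):
        subfolder = MODEL_NODE_FOLDERS.get(node.get("type"))
        values = node.get("widgets_values") or []
        if subfolder is None or not values:
            continue
        filename = values[0]
        if not (Path(models_root) / subfolder / filename).exists():
            missing.append((subfolder, filename))
    return missing

if __name__ == "__main__":
    for folder, name in find_missing_models("workflow.json"):
        print(f"missing: models/{folder}/{name}")
```

The manager could then offer to download each reported file, the same way "Install Missing Nodes" works for nodes today.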

I would like to contribute and try fixing this issue.

shawnington commented 1 month ago

Write the code, open a pull request.

MiladZarour commented 1 month ago

How do I start the main.py script? I am getting this error when I run it:

  File "D:\Fix_Comfyui\ComfyUI\comfy\model_management.py", line 89, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Fix_Comfyui\ComfyUI\venv\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device
    _lazy_init()
  File "D:\Fix_Comfyui\ComfyUI\venv\Lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Process finished with exit code 1

I want to test my code. Is there another way to run it?
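I guess a quick way to check whether the venv's PyTorch build even has CUDA support is a small standalone snippet like this (not part of ComfyUI, just a sanity check):

```python
import torch

# A CPU-only wheel reports a version like "2.3.0+cpu" and torch.version.cuda is None;
# a CUDA wheel reports e.g. "2.3.0+cu118".
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)        # None on CPU-only builds
print("CUDA available:", torch.cuda.is_available())  # needs a CUDA build AND a working driver
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If torch.version.cuda comes back as None, the venv has a CPU-only wheel, and reinstalling PyTorch from the CUDA wheel index is, I believe, the usual fix.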

MiladZarour commented 1 month ago

Oh, CUDA wasn't installed. I think it should work now.

MiladZarour commented 1 month ago

To run main.py, I think I have to use the same command as run_nvidia_gpu.bat: python.exe -s ComfyUI\main.py --windows-standalone-build

MiladZarour commented 1 month ago

I installed CUDA, installed everything in requirements.txt, and am running main.py with D:\Fix_Comfyui\ComfyUI\venv\Scripts\python.exe -s D:\Fix_Comfyui\ComfyUI\main.py --windows-standalone-build. I am getting this error:

Traceback (most recent call last):
  File "D:\Fix_Comfyui\ComfyUI\main.py", line 79, in <module>
    import execution
  File "D:\Fix_Comfyui\ComfyUI\execution.py", line 11, in <module>
    import nodes
  File "D:\Fix_Comfyui\ComfyUI\nodes.py", line 21, in <module>
    import comfy.diffusers_load
  File "D:\Fix_Comfyui\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "D:\Fix_Comfyui\ComfyUI\comfy\sd.py", line 5, in <module>
    from comfy import model_management
  File "D:\Fix_Comfyui\ComfyUI\comfy\model_management.py", line 120, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
                                  ^^^^^^^^^^^^^^^^^^
  File "D:\Fix_Comfyui\ComfyUI\comfy\model_management.py", line 89, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Fix_Comfyui\ComfyUI\venv\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device
    _lazy_init()
  File "D:\Fix_Comfyui\ComfyUI\venv\Lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

And when running:

Milad@Milad MINGW64 /d/Fix_Comfyui/ComfyUI (master)
$ D:\\Fix_Comfyui\\ComfyUI\\venv\\Scripts\\python.exe D:\\Fix_Comfyui\\ComfyUI\\main.py
Traceback (most recent call last):
  File "D:\Fix_Comfyui\ComfyUI\main.py", line 79, in <module>
    import execution
  File "D:\Fix_Comfyui\ComfyUI\execution.py", line 11, in <module>
    import nodes
  File "D:\Fix_Comfyui\ComfyUI\nodes.py", line 21, in <module>
    import comfy.diffusers_load
  File "D:\Fix_Comfyui\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "D:\Fix_Comfyui\ComfyUI\comfy\sd.py", line 5, in <module>
    from comfy import model_management
  File "D:\Fix_Comfyui\ComfyUI\comfy\model_management.py", line 120, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
                                  ^^^^^^^^^^^^^^^^^^
  File "D:\Fix_Comfyui\ComfyUI\comfy\model_management.py", line 89, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Fix_Comfyui\ComfyUI\venv\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device
    _lazy_init()
  File "D:\Fix_Comfyui\ComfyUI\venv\Lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

MiladZarour commented 1 month ago

It works now!

D:\Fix_Comfyui\ComfyUI\venv\Scripts\python.exe D:\Fix_Comfyui\ComfyUI\main.py 
Total VRAM 8192 MB, total RAM 65365 MB
pytorch version: 2.3.0+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
D:\Fix_Comfyui\ComfyUI\comfy\extra_samplers\uni_pc.py:19: SyntaxWarning: invalid escape sequence '\h'
  """Create a wrapper class for the forward SDE (VP type).
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******

Import times for custom nodes:
   0.0 seconds: D:\Fix_Comfyui\ComfyUI\custom_nodes\websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188

Now I will start!

MiladZarour commented 1 month ago

And I realised that I actually forked the wrong ComfyUI 😁

I should have forked https://github.com/ltdrdata/ComfyUI-Manager instead, because that is where the edit should go, I believe...
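For the download side, the step such a manager feature would need could look roughly like this (a purely hypothetical sketch, not ComfyUI-Manager's actual API; the filename-to-URL mapping would have to come from the manager's model list):

```python
import urllib.request
from pathlib import Path

def download_missing_models(missing: dict[str, str], models_root: str = "models"):
    """Download missing model files given a {relative_path: url} mapping.

    `missing` is assumed to map e.g. "ipadapter/ip-adapter_sd15.safetensors"
    to a download URL taken from the manager's model list.
    """
    for rel_path, url in missing.items():
        target = Path(models_root) / rel_path
        if target.exists():
            continue  # already installed, nothing to do
        target.parent.mkdir(parents=True, exist_ok=True)
        print(f"downloading {rel_path} ...")
        urllib.request.urlretrieve(url, str(target))  # simple blocking download
```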

haraeza commented 1 month ago

I get the same error. How did you fix it?

VantomPayne commented 1 month ago

Same error here.

doctorpangloss commented 3 weeks ago

Take a look at https://github.com/hiddenswitch/ComfyUI?tab=readme-ov-file#installing for better support for automatic model downloading.