gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0

Mac: KeyError: 'flags' (dict returned by get_cpu_info does not contain 'flags') #13

Closed: dfl closed this issue 5 months ago

dfl commented 5 months ago

hello, thanks for all your work on this project!

I'm running on a Mac M3, and the dict returned by get_cpu_info() does not contain a 'flags' key.

>>> from cpuinfo import get_cpu_info
>>> get_cpu_info()
{'python_version': '3.11.7.final.0 (64 bit)', 'cpuinfo_version': [9, 0, 0], 'cpuinfo_version_string': '9.0.0', 'arch': 'ARM_8', 'bits': 64, 'count': 16, 'arch_string_raw': 'arm64', 'brand_raw': 'Apple M3 Max'}

This makes it work:

diff --git a/install_init.py b/install_init.py
index 889c151..024526c 100644
--- a/install_init.py
+++ b/install_init.py
@@ -49,7 +49,7 @@ def get_system_info():

     # Check for AVX2 support
     if importlib.util.find_spec('cpuinfo'):        
-        system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info()['flags']
+        system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info().get('flags',[])
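
For anyone patching by hand, here is the same guarded check as a standalone snippet (a minimal sketch; the system_info dict name matches install_init.py, and the empty-list default is the only behavioral change):

import importlib.util

system_info = {}
if importlib.util.find_spec('cpuinfo'):
    import cpuinfo
    # py-cpuinfo does not report a 'flags' entry on Apple Silicon, so fall back to an empty list
    system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info().get('flags', [])
else:
    system_info['avx2'] = False
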
dfl commented 5 months ago

PS I also had a dependency error with the latest diffusers, which requires huggingface-hub==0.20.3 (instead of 0.20.1)
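
In case anyone hits the same conflict: pinning it explicitly with pip install huggingface-hub==0.20.3 in the ComfyUI venv resolved it for me (adjust the pin to whatever your diffusers release requires).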

dfl commented 5 months ago

one more issue...

Installing llama-cpp-python...
ERROR: llama_cpp_python-0.2.43-macosx_14_0_arm64.whl is not a valid wheel filename.
Traceback (most recent call last):
  File "/Users/dfl/sd/ComfyUI/nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/dfl/sd/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 30, in <module>
    install_llama(system_info)
  File "/Users/dfl/sd/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/install_init.py", line 93, in install_llama
    install_package("llama-cpp-python", custom_command=custom_command)
  File "/Users/dfl/sd/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/install_init.py", line 73, in install_package
    subprocess.check_call(command)
  File "/opt/homebrew/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/dfl/sd/ComfyUI/venv/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', 'https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.43/llama_cpp_python-0.2.43-macosx_14_0_arm64.whl']' returned non-zero exit status 1.

Cannot import /Users/dfl/sd/ComfyUI/custom_nodes/ComfyUI_VLM_nodes module for custom nodes: Command '['/Users/dfl/sd/ComfyUI/venv/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', 'https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.43/llama_cpp_python-0.2.43-macosx_14_0_arm64.whl']' returned non-zero exit status 1.
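
For what it's worth, pip rejects that URL because the filename lacks the Python and ABI tags: a wheel must be named name-version(-build)?-pythontag-abitag-platformtag.whl, and llama_cpp_python-0.2.43-macosx_14_0_arm64.whl carries only a platform tag (compare the locally built llama_cpp_python-0.2.43-cp311-cp311-macosx_14_0_arm64.whl in the log below). A quick way to check a candidate filename, using the packaging library (installing packaging separately is my assumption; pip only vendors it internally):

from packaging.utils import parse_wheel_filename, InvalidWheelFilename

good = "llama_cpp_python-0.2.43-cp311-cp311-macosx_14_0_arm64.whl"
bad = "llama_cpp_python-0.2.43-macosx_14_0_arm64.whl"

parse_wheel_filename(good)  # parses into (name, version, build, tags)
try:
    parse_wheel_filename(bad)
except InvalidWheelFilename:
    print("missing python/abi tags, so pip refuses to install it")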

installing manually worked:

(base) ➜  ComfyUI git:(master) ✗ ./venv/bin/pip install llama-cpp-python
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.43.tar.gz (36.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.6/36.6 MB 14.8 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in ./venv/lib/python3.11/site-packages (from llama-cpp-python) (4.8.0)
Requirement already satisfied: numpy>=1.20.0 in ./venv/lib/python3.11/site-packages (from llama-cpp-python) (1.24.4)
Requirement already satisfied: diskcache>=5.6.1 in ./venv/lib/python3.11/site-packages (from llama-cpp-python) (5.6.3)
Requirement already satisfied: jinja2>=2.11.3 in ./venv/lib/python3.11/site-packages (from llama-cpp-python) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.11/site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.3)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.2.43-cp311-cp311-macosx_14_0_arm64.whl size=2265033 sha256=afb33ffff14641c2d0da077601821e547400e2b6957060a56727987a9339708e
  Stored in directory: /private/var/folders/71/7jzsfc7x6xx8t1fyzxphfhgh0000gn/T/pip-ephem-wheel-cache-zw0tsmjq/wheels/2a/9d/08/b558eac0caff83868235db10ed76b5b1d2ec06276cdd26dc5d
Successfully built llama-cpp-python
Installing collected packages: llama-cpp-python
Successfully installed llama-cpp-python-0.2.43
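
Given that, install_init.py could probably try the prebuilt wheel first and fall back to a source build; a sketch under that assumption (the wheel URL is the one from the failing command above):

import subprocess
import sys

wheel_url = ("https://github.com/abetlen/llama-cpp-python/releases/download/"
             "v0.2.43/llama_cpp_python-0.2.43-macosx_14_0_arm64.whl")
try:
    # Try the prebuilt wheel first
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--no-cache-dir", wheel_url])
except subprocess.CalledProcessError:
    # Fall back to the sdist and build locally, which is what worked here
    subprocess.check_call([sys.executable, "-m", "pip", "install", "llama-cpp-python"])
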
dfl commented 5 months ago

and finally: AssertionError: Torch not compiled with CUDA enabled

joytag_patch_for_no_cuda.txt
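
The general shape of that patch is the usual device-selection guard; a minimal sketch (assuming PyTorch 1.12+ for the MPS check; not the literal patch contents):

import torch

# Prefer CUDA, then Apple's Metal backend (MPS), then plain CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.zeros(1, device=device)  # instead of an unconditional .cuda()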

gokayfem commented 5 months ago

is it working after these changes?

dfl commented 5 months ago

yes, after changing the CUDA stuff it works 👍🏼


yiwangsimple commented 5 months ago

Mac M1 Max: this node cannot be installed properly. How should I deal with it?

gokayfem commented 5 months ago

The AutoGPTQ library does not support macOS, so I also didn't implement the other things suggested in this issue for macOS. I might need to create a separate branch just for Mac. I don't own a Mac, so I can't iterate on it quickly, sorry.
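
One alternative to a separate branch might be gating the AutoGPTQ install on the platform at runtime; a minimal sketch (the auto-gptq PyPI name and the raw pip call are my stand-ins for the repo's install_package helper):

import platform
import subprocess
import sys

if platform.system() != "Darwin":
    # AutoGPTQ has no macOS support, so only attempt the install on Linux/Windows
    subprocess.check_call([sys.executable, "-m", "pip", "install", "auto-gptq"])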

gokayfem commented 5 months ago

Mac M1 Max: this node cannot be installed properly. How should I deal with it?

I created another branch called "mac" and implemented the things suggested in this issue. If you download the mac branch and put it in your custom_nodes folder, I think you can use it. If you get any errors, please let me know in this issue.

yiwangsimple commented 5 months ago

Mac M1 Max: this node cannot be installed properly. How should I deal with it?

I created another branch called "mac" and implemented the things suggested in this issue. If you download the mac branch and put it in your custom_nodes folder, I think you can use it. If you get any errors, please let me know in this issue.

Great, but I just tried to install it and it still won't import!

gokayfem commented 5 months ago

Can you provide me the errors, please? I don't have a Mac, so I can't check it.

yiwangsimple commented 5 months ago

Can you provide me the errors, please? I don't have a Mac, so I can't check it.

Here is the error message from the terminal:

Traceback (most recent call last):
  File "/Users/weiwei/ComfyUI/nodes.py", line 1887, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 29, in <module>
    system_info = get_system_info()
  File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/install_init.py", line 52, in get_system_info
    system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info()['flags']
KeyError: 'flags'

Cannot import /Users/weiwei/ComfyUI/custom_nodes/ComfyUI_VLM_nodes module for custom nodes: 'flags'

gokayfem commented 5 months ago

1-) First go to the mac branch

(screenshot)

2-) Then download the zip

(screenshot)

3-) Delete original VLM nodes

(screenshot)

4-) Extract the zip, then try to start ComfyUI again.
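
Equivalently, from the command line: remove the original folder, then run git clone -b mac https://github.com/gokayfem/ComfyUI_VLM_nodes.git inside ComfyUI/custom_nodes (repo URL inferred from the repository name at the top of this page).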

yiwangsimple commented 5 months ago

Thanks so much, the install worked with the zip!