Hangover3832 / ComfyUI-Hangover-Moondream

Moondream is a lightweight multimodal large language model
https://github.com/Hangover3832/ComfyUI-Hangover-Moondream
Apache License 2.0

Error on first use #4

Closed: amarillosebas closed this issue 7 months ago

amarillosebas commented 7 months ago

trust_remote_code is set to true. And yes, the error message ends like that; it seems incomplete. This is the error message shown in ComfyUI:

Error occurred when executing Moondream Interrogator (NO COMMERCIAL USE):

cannot import name 'ToImage' from 'torchvision.transforms.v2' (F:\AI\ComfyUI\python_embeded\lib\site-packages\torchvision\transforms\v2\__init__.py)

File "F:\AI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "F:\AI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "F:\AI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "F:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream\ho_moondream.py", line 56, in interrogate self.model = AutoModelForCausalLM.from_pretrained(huggingface_model, trust_remote_code=trust_remote_code).to(dev) File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\models\auto\auto_factory.py", line 455, in from_pretrained model_class = get_class_from_dynamic_module( File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 374, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 147, in get_class_in_module module = importlib.import_module(module_path) File "importlib__init__.py", line 126, in import_module File "", line 1050, in _gcd_import File "", line 1027, in _find_and_load File "", line 1006, in _find_and_load_unlocked File "", line 688, in _load_unlocked File "", line 883, in exec_module File "", line 241, in _call_with_frames_removed File "C:\Users\amari/.cache\huggingface\modules\transformers_modules\vikhyatk\moondream1\f6e9da68e8f1b78b8f3ee10905d56826db7a5802\moondream.py", line 3, in from .vision_encoder import VisionEncoder File "C:\Users\amari/.cache\huggingface\modules\transformers_modules\vikhyatk\moondream1\f6e9da68e8f1b78b8f3ee10905d56826db7a5802\vision_encoder.py", line 5, in from torchvision.transforms.v2 import (

amarillosebas commented 7 months ago

And this is the error shown in the command prompt:

moondream: loading model vikhyatk/moondream1, please stand by....
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "F:\AI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "F:\AI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "F:\AI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "F:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream\ho_moondream.py", line 56, in interrogate
    self.model = AutoModelForCausalLM.from_pretrained(huggingface_model, trust_remote_code=trust_remote_code).to(dev)
  File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\models\auto\auto_factory.py", line 455, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 374, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module.replace(".py", ""))
  File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 147, in get_class_in_module
    module = importlib.import_module(module_path)
  File "importlib\__init__.py", line 126, in import_module
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\amari/.cache\huggingface\modules\transformers_modules\vikhyatk\moondream1\f6e9da68e8f1b78b8f3ee10905d56826db7a5802\moondream.py", line 3, in <module>
    from .vision_encoder import VisionEncoder
  File "C:\Users\amari/.cache\huggingface\modules\transformers_modules\vikhyatk\moondream1\f6e9da68e8f1b78b8f3ee10905d56826db7a5802\vision_encoder.py", line 5, in <module>
    from torchvision.transforms.v2 import (
ImportError: cannot import name 'ToImage' from 'torchvision.transforms.v2' (F:\AI\ComfyUI\python_embeded\lib\site-packages\torchvision\transforms\v2\__init__.py)
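
A side note on the two "Explicitly passing a revision is encouraged" warnings: when trust_remote_code is enabled, transformers lets the caller pin the downloaded code to a specific commit. A hedged sketch, reusing the commit hash visible in the cache path of the traceback above; this only silences the warning and does not address the ToImage ImportError itself:

    from transformers import AutoModelForCausalLM

    # Pin the dynamically downloaded modeling code to a known commit.
    # The hash is the one that appears in the cache path of the traceback above.
    model = AutoModelForCausalLM.from_pretrained(
        "vikhyatk/moondream1",
        trust_remote_code=True,
        revision="f6e9da68e8f1b78b8f3ee10905d56826db7a5802",
    )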

Hangover3832 commented 7 months ago

@amarillosebas can you please check if the latest update fixes the issue? Make sure to select the moondream2 model within the node.

amarillosebas commented 7 months ago

I'm getting this error shown in ComfyUI:

Error occurred when executing Moondream Interrogator (NO COMMERCIAL USE):

This modeling file requires the following packages that were not found in your environment: flash_attn. Run pip install flash_attn

File "F:\AI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "F:\AI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "F:\AI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "F:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream\ho_moondream.py", line 56, in interrogate self.model = AutoModelForCausalLM.from_pretrained(huggingface_model, trust_remote_code=trust_remote_code).to(dev) File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\models\auto\auto_factory.py", line 455, in from_pretrained model_class = get_class_from_dynamic_module( File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 363, in get_class_from_dynamic_module final_module = get_cached_module_file( File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 274, in get_cached_module_file get_cached_module_file( File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 237, in get_cached_module_file modules_needed = check_imports(resolved_module_file) File "F:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dynamic_module_utils.py", line 134, in check_imports raise ImportError(

Hangover3832 commented 7 months ago

Please try to install flash_attn; the easiest way is via ComfyUI-Manager (Install PIP packages).
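
For anyone not using ComfyUI-Manager, the same step can be driven from a console through the embedded interpreter. A hedged sketch of that step from Python (the package name is the one the error message asks for; whether a prebuilt wheel exists for this particular torch/CUDA combination is not guaranteed):

    import subprocess
    import sys

    # Same effect as ComfyUI-Manager's "Install PIP packages": install flash_attn
    # into whatever interpreter is currently running (the embedded one, if this
    # is launched with python_embeded\python.exe).
    subprocess.check_call([sys.executable, "-m", "pip", "install", "flash_attn"])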

amarillosebas commented 7 months ago

That's this one, right?

Hangover3832 commented 7 months ago

Basically, just type flash_attn into ComfyUI-Manager's Install PIP packages and let pip decide. It's somewhat confusing, however: I do not have flash_attn installed (and obviously don't need it). Do you have some command line switches that use flash attention or something similar in ComfyUI?

amarillosebas commented 7 months ago

Well, that's one of the things I've tried so far. It's just not doable for some reason on my system; errors everywhere. I might just give up.

As for the second question, no idea. Maybe some other node uses it? Couldn't tell you. I've learned the few things I know about Python by using ComfyUI and ChatGPT. There is a lot of stuff I don't understand.

amarillosebas commented 7 months ago

Just in case, this is what I get in the command prompt when installing it the way you mentioned:

[!] error: subprocess-exited-with-error
[!]
[!] python setup.py egg_info did not run successfully.
[!] exit code: 1
[!]
[!] [20 lines of output]
[!] fatal: not a git repository (or any of the parent directories): .git
[!] C:\Users\amari\AppData\Local\Temp\pip-install-trzl1f_w\flash-attn_9e03e260bb7b448bb9a22bcbae081cf2\setup.py:78: UserWarning: flash_attn was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
[!] warnings.warn(
[!] Traceback (most recent call last):
[!] File "<string>", line 2, in <module>
[!] File "<pip-setuptools-caller>", line 34, in <module>
[!] File "C:\Users\amari\AppData\Local\Temp\pip-install-trzl1f_w\flash-attn_9e03e260bb7b448bb9a22bcbae081cf2\setup.py", line 133, in <module>
[!] CUDAExtension(
[!] File "F:\AI\ComfyUI\python_embeded\lib\site-packages\torch\utils\cpp_extension.py", line 1048, in CUDAExtension
[!] library_dirs += library_paths(cuda=True)
[!] File "F:\AI\ComfyUI\python_embeded\lib\site-packages\torch\utils\cpp_extension.py", line 1186, in library_paths
[!] paths.append(_join_cuda_home(lib_dir))
[!] File "F:\AI\ComfyUI\python_embeded\lib\site-packages\torch\utils\cpp_extension.py", line 2223, in _join_cuda_home
[!] raise EnvironmentError('CUDA_HOME environment variable is not set. '
[!] OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
[!]
[!] torch.__version__ = 2.0.1+cu118
[!]
[!] [end of output]
[!]
[!] note: This error originates from a subprocess, and is likely not a problem with pip.
[!] error: metadata-generation-failed
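
The actual failure is inside torch.utils.cpp_extension: flash_attn is being built from source, which needs nvcc on PATH and a CUDA_HOME pointing at a full CUDA toolkit, neither of which the embedded ComfyUI environment provides. A small diagnostic sketch of those prerequisites (whether installing the CUDA toolkit would make this particular build succeed is an assumption):

    import os
    import shutil
    import torch

    # flash_attn compiles a CUDA extension, so the build needs both of these.
    print("torch:", torch.__version__, "| CUDA runtime available:", torch.cuda.is_available())
    print("CUDA_HOME:", os.environ.get("CUDA_HOME"))
    print("nvcc on PATH:", shutil.which("nvcc"))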

Hangover3832 commented 7 months ago

Unfortunately I cannot help you much here. You seem to have torch 2.0 and CUDA 11.8, while my ComfyUI installation uses torch 2.2 and CUDA 12.1. If ComfyUI works fine for you aside from this node, there are also other Moondream nodes you can try. If you run into a lot of other issues in ComfyUI, starting over with a fresh installation can sometimes work wonders.

amarillosebas commented 7 months ago

I understand, don't worry. I don't want to take more of your time. It's funny though, because I spent a good amount of time trying to update everything, including torch, just 2 days ago. Not a fun experience, working with python LOL. And I thought game development was stressful. Oh boy was I wrong.