if-ai / ComfyUI-IF_AI_tools

ComfyUI-IF_AI_tools is a set of custom nodes for ComfyUI that allows you to generate prompts using a local Large Language Model (LLM) via Ollama. This tool enables you to enhance your image generation workflow by leveraging the power of language models.
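For context, the nodes talk to a locally running Ollama server over its HTTP API to produce prompt text. A minimal sketch of that interaction (assuming Ollama's default port 11434 and a placeholder model name; this is not the node's actual code):

```python
import requests

# Minimal sketch: ask a locally running Ollama server to expand a short idea
# into an image-generation prompt. Assumes Ollama listens on its default
# port 11434 and that the named model has already been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate_sd_prompt(idea: str, model: str = "mistral") -> str:
    payload = {
        "model": model,
        "prompt": f"Write a detailed Stable Diffusion prompt for: {idea}",
        "stream": False,  # return the whole response at once
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate_sd_prompt("a foggy harbor at dawn"))
```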

ValueError: Invalid model selected: for engine ollama. Available models: #35

Closed: apple-mark closed this issue 5 months ago

apple-mark commented 5 months ago

Error occurred when executing IF_ChatPrompt:

Invalid model selected: for engine ollama. Available models: []

File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFChatPromptNode.py", line 256, in describe_picture raise ValueError(error_message)

apple-mark commented 5 months ago

(screenshot attached)

ThomasRoyer24 commented 5 months ago

Error occurred when executing IF_ChatPrompt:

Invalid model selected: for engine ollama. Available models: []

File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFChatPromptNode.py", line 256, in describe_picture raise ValueError(error_message)

For me, the problem was that no model had been selected. You need to download a model with Ollama first, for example: ollama run mistral. Then pick it in selected_model on the IF Chat Prompt node in ComfyUI.

if-ai commented 5 months ago

Yes, as @ThomasRoyer24 said, you need to install Ollama (or use any of the other engines supported by the repo; there are YouTube video guides on how to do it). Essentially: go to https://ollama.com/, download and install Ollama for your system, then open a terminal or command prompt and type ollama run impactframes/llama3_ifai_sd_prompt_mkr_q4km. This installs the main recommended model for working with SD prompts. There are several other great models you can run locally for working with images; for example, use ollama run llava-llama3 to install one. After that, restart ComfyUI and the models should appear in the dropdown menu.
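Once the models are pulled, a quick way to confirm they are visible before restarting ComfyUI is to query the same Ollama endpoint again. A sketch (assuming Ollama's default port; the model names are the ones mentioned above):

```python
import requests

# Quick check, before restarting ComfyUI, that the pulled models are
# actually reported by the local Ollama server (default address assumed).
EXPECTED = {"impactframes/llama3_ifai_sd_prompt_mkr_q4km", "llava-llama3"}

installed = {
    m["name"].split(":")[0]  # drop the ":latest" tag for comparison
    for m in requests.get("http://localhost:11434/api/tags", timeout=10).json().get("models", [])
}

missing = EXPECTED - installed
if missing:
    print("Still missing from Ollama:", ", ".join(sorted(missing)))
else:
    print("All models installed; restart ComfyUI and they should show up in the dropdown.")
```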