Nuked88 / ComfyUI-N-Nodes

A suite of custom nodes for ComfyUI that includes GPT text-prompt generation, LoadVideo, SaveVideo, LoadFramesFromFolder and FrameInterpolator
MIT License

Getting error when trying to load LLM #52

Open Pojo267 opened 7 months ago

Pojo267 commented 7 months ago

I've tried loading two different LLMs within the GPT Loader Simple node and both give me the same error and I'm not sure what the issue is.

The two LLMs attempted are:

dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf
phi-2-layla-v1-chatml-Q8_0.gguf

Error occurred when executing GPT Loader Simple [n-suite]:

File "...\GitHub\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "...\GitHub\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "...\GitHub\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
    res_value = old_func(*final_args, **kwargs)
File "...\GitHub\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "...\GitHub\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 388, in load_gpt_checkpoint
    llm = Llama(model_path=ckpt_path, n_gpu_layers=gpu_layers, verbose=False, n_threads=n_threads, n_ctx=max_ctx)
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 923, in __init__
    self._n_vocab = self.n_vocab()
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 2184, in n_vocab
    return self._model.n_vocab()
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 250, in n_vocab
    assert self.model is not None
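The failing assertion `assert self.model is not None` fires after llama.cpp returned a null model pointer, i.e. the GGUF file could not be loaded at all. Common causes are a corrupt or truncated download, or a llama-cpp-python build too old for the GGUF version the file uses. As a first sanity check, independent of ComfyUI, you can verify the file actually starts with the GGUF magic bytes and read its format version (a minimal sketch; `gguf_header` is a hypothetical helper, not part of llama-cpp-python — GGUF files begin with the 4-byte magic `GGUF` followed by a little-endian uint32 version):

```python
import struct

def gguf_header(path):
    """Return (is_gguf, version) for a model file.

    version is None when the magic does not match. GGUF files
    start with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 format version.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return False, None
        (version,) = struct.unpack("<I", f.read(4))
        return True, version

# Example with a synthetic header standing in for a real model file:
with open("model.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(gguf_header("model.gguf"))  # (True, 3)
```

If the magic check fails, the download is likely incomplete and re-downloading the model should be the first step; if it passes but the node still raises this assertion, updating llama-cpp-python to a build that supports that GGUF version may help.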