gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0

Consultation on use #35

Closed JZZZ1314 closed 7 months ago

JZZZ1314 commented 7 months ago

I would like to ask how to use a gguf model to refine text-to-image prompts. In the examples, the prompts are generated from an image description (there is an example using the API, but no example using a local model). When I try, I get this error:

```
Error occurred when executing LLMSampler:

exception: access violation reading 0x000001E66891B000

File "C:\comfyui\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\comfyui\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\comfyui\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\comfyui\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\suggest.py", line 291, in generate_text_advanced
    response = llm.create_chat_completion(messages=[
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 1638, in create_chat_completion
    return handler(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama_chat_format.py", line 2006, in __call__
    llama.create_completion(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 1474, in create_completion
    completion: Completion = next(completion_or_chunks)  # type: ignore
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 1000, in _create_completion
    for token in self.generate(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 684, in generate
    token = self.sample(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 603, in sample
    id = sampling_context.sample(ctx_main=self._ctx, logits_array=logits)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\_internals.py", line 754, in sample
    ctx_main.sample_repetition_penalties(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\_internals.py", line 350, in sample_repetition_penalties
    llama_cpp.llama_sample_repetition_penalties(
```

gokayfem commented 7 months ago

[screenshot of an example workflow]

You can use this workflow to fix your prompts and generate creative ones.

Use llava-v1.6-mistral-7b.Q5_K_M.gguf as the LLM.
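For reference, the node is calling llama-cpp-python's `create_chat_completion` on the local gguf model (that is the call visible in the traceback from `suggest.py`). Below is a minimal sketch of that same call outside ComfyUI; the model path, system prompt, and sampling parameters are illustrative assumptions, not the node's exact defaults.

```python
# Hedged sketch: refine a text-to-image prompt with a local gguf model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="ComfyUI/models/LLavacheckpoints/llava-v1.6-mistral-7b.Q5_K_M.gguf",  # assumed location
    n_ctx=4096,       # context window size (assumed)
    n_gpu_layers=-1,  # offload all layers to GPU if the wheel was built with CUDA support
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You rewrite short image descriptions into detailed, "
                       "comma-separated Stable Diffusion prompts.",
        },
        {"role": "user", "content": "a cat sitting on a windowsill at sunset"},
    ],
    max_tokens=256,
    temperature=0.7,
)

# The refined prompt text is in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```

If this standalone sketch also crashes with an access violation, the problem is likely in the local llama-cpp-python install (e.g. a wheel built for a different CPU/GPU setup) rather than in the node itself.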