Closed reniv1 closed 5 months ago
Seconding this. Been trying to get koboldcpp to work for a while now with this node. Can't get anything usable out of it.
Only the chat prompt node has kobold as a backend option, and none of the outputs provided by the node actually carry the messages coming from koboldcpp, despite my best efforts.
Something seems broken in the API implementation, or better instructions are needed for how this is supposed to work.
Sorry, I am fixing it now. I just got home. I had an issue with the images; not sure what is happening, it was working before. I am working on it.
Koboldcpp says they have support for image messages in their API, but I tried everything with both the OpenAI endpoint and the native endpoint, and it does not pick up the image. Maybe they do support images, but only with a particular model. For now, only text seems to be working. Please try again after updating.
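For anyone who wants to reproduce the image attempt against koboldcpp directly, here is a rough sketch of a request to its OpenAI-compatible chat endpoint. The port, the data-URL image encoding, and whether the loaded model accepts image content at all are assumptions, so treat this as an illustration rather than a confirmed working call:

```python
import base64
import requests

# Hypothetical local koboldcpp instance; adjust host/port to your launch settings.
KOBOLD_URL = "http://localhost:5001/v1/chat/completions"

# Encode a local image as a base64 data URL, the way the OpenAI chat format expects.
with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "kobold",  # for a local single-model server this field is usually informational
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
    "max_tokens": 256,
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If the reply ignores the image entirely, that points at the loaded model (or the endpoint) rather than the node.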
Thanks for your help, it works well. The text was what I planned on using anyway, just to add a bit of randomness to prompts.
Thank you. I hope they improve the vision model support in the API. It's the same with oobabooga; they only support some old vision models, which restricts what I can do.
I am having trouble getting usable text, if any, to come out of the response pin; the question pin comes out as expected, though. The LLM backend receives the question, but nothing is coming out. I am using the IF Chat Prompt node. My workflow is just an IF Chat Prompt connected to a Show Text on the 4 pins. I am on a Windows OS computer.
I have tried changing the stop, using <|end_of_text|>, </s>, ., and <|eot_id|>. I also changed the assistant to MKR, and the model to llama3_if_ai_sdpromptmkr_q4km. I have tried a few other models as well, mostly llama 3 or 2 models. Besides that, I left everything at default except the port numbers. Is there any way to just get the raw text out? I tried flipping the text cleanup, but nothing happened. I also changed the assistant to none. I have nothing going into the image or context pin, just a command in the prompt box. Thanks for your help.
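If it helps in debugging, a direct call to koboldcpp's native generate endpoint should show whether raw text comes back outside the node at all. This is only a sketch; the port, the `stop_sequence` field, and the response shape are based on my understanding of the KoboldAI-style API, so verify them against your koboldcpp version:

```python
import requests

# Hypothetical local koboldcpp instance; adjust the port to match your launch settings.
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Write a short Stable Diffusion prompt for a castle at sunset.",
    "max_length": 200,
    "temperature": 0.7,
    # Stop sequences similar to what the node would send; adjust per model.
    "stop_sequence": ["<|eot_id|>", "</s>"],
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
resp.raise_for_status()
# The native API is expected to return {"results": [{"text": "..."}]}.
print(resp.json()["results"][0]["text"])
```

If text comes back here but not from the node, the problem is likely in how the node parses or cleans the response rather than in the backend.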
Below is the output from the Koboldcpp backend terminal, the first being MKR with the llama3_if_ai model, and the second being none as the assistant with a llama 3 model. I had similar results with ollama, but it wasn't showing anything in the terminal.