marc2608 opened 1 month ago
That's due to the LocalHost URL you set in the webui settings. It has to be just the base endpoint, e.g. http://localhost:1234/v1, or whatever host and port you personally have it set to; don't add anything after the /v1, otherwise you will get errors and warnings. In particular, don't append /chat/completions to the end, just keep it as http://localhost:(port)/v1. Also make sure that port is not the same as the webui port, and then everything should work just fine.
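For anyone who wants to see why the URL has to stop at /v1: as far as I can tell the extension appends the chat path itself, so entering /chat/completions yourself would double it up. Here is a rough sketch of the same request done by hand with Python's requests; the port 11434 and the llama3.2 model name are just assumptions from my own setup, swap in whatever you actually use.

```python
# Minimal sketch (assumes the extension appends "/chat/completions" to the
# base URL you enter; I haven't verified its exact path-joining code).
import requests

base_url = "http://localhost:11434/v1"   # base endpoint only, nothing after /v1

payload = {
    "model": "llama3.2",                 # whichever model you have loaded
    "messages": [{"role": "user", "content": "Say hello"}],
}

# The full chat endpoint is built from the base; if the base already ended in
# "/chat/completions" you would end up with ".../chat/completions/chat/completions".
resp = requests.post(f"{base_url}/chat/completions", json=payload, timeout=120)
print(resp.status_code)
print(resp.json())
```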
Ok, thank you. Could you tell me what I should change in this config, and where?
You have it mostly correct, but in Ollama itself you have to set the port to 11434; that should make it work as intended. As for the webui port, if you haven't already, I'd suggest running A1111 through Stability Matrix instead, since it makes changing settings much easier. The webui port itself cannot be 11434.
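A quick way to confirm that what's answering on 11434 really is Ollama and not the webui (just a sketch, assuming Ollama's default port and that you haven't changed it):

```python
# Connectivity check: Ollama's root endpoint normally replies "Ollama is running".
import requests

r = requests.get("http://localhost:11434/", timeout=5)
print(r.status_code, r.text.strip())
```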
ollama run llama3.2
P.S. The sheep icon in the system tray does not mean a model is loaded; you need to load a model with a command. Also change the default llama3.1 to llama3.2 (depending on which model you load), the same as @LadyFlames says.
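If you want to double-check which models are actually pulled and which one is currently loaded, rather than trusting the tray icon, something like this should work. It's only a sketch and assumes Ollama's native API on its default port 11434:

```python
# /api/tags lists the models you have pulled locally,
# /api/ps lists the models currently loaded into memory.
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
print("installed:", [m["name"] for m in tags.get("models", [])])

ps = requests.get("http://localhost:11434/api/ps", timeout=10).json()
print("loaded:", [m["name"] for m in ps.get("models", [])])
```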
Thanks to your advice my problem is solved and it works perfectly. Many thanks to @LadyFlames and @xlinx!
anytime
Hello, I can't get llm-text to work with Ollama. Could I have some explanation of how to configure the setup exactly, for example regarding the API key, etc.? I have Ollama running on my PC while A1111 is running, with llama 3.1 loaded. The civitai meta grabber works fine, and llm-text also works when configured for OpenAI with the API key, but I can't get it to work with Ollama. In the LLM answer window I keep getting this message: [Auto-LLM][Result][Missing LLM-Text]'choices'. Thank you very much in advance.
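For what it's worth, here is my guess at what that message means: the extension seems to expect an OpenAI-style response and to look up a 'choices' key, so when the URL, port, or model name is wrong, the JSON that comes back has no 'choices' and you get [Auto-LLM][Result][Missing LLM-Text]'choices'. The sketch below only illustrates that kind of lookup; the response dicts are made up for illustration, not captured from the extension.

```python
# Illustration of an OpenAI-style "choices" lookup and what happens when the
# server replies with something else (e.g. an error body).

ok_response = {
    "choices": [{"message": {"role": "assistant", "content": "a prompt..."}}]
}
bad_response = {"error": "model 'llama3.1' not found"}  # hypothetical error body

def extract_llm_text(resp: dict) -> str:
    # Same shape of lookup the extension appears to perform.
    try:
        return resp["choices"][0]["message"]["content"]
    except (KeyError, IndexError):
        return "[Missing LLM-Text]'choices'"

print(extract_llm_text(ok_response))   # -> "a prompt..."
print(extract_llm_text(bad_response))  # -> "[Missing LLM-Text]'choices'"
```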