Closed: HakaishinShwet closed this issue 3 months ago.
Hi @HakaishinShwet ! We've added support for running with your own custom OpenAI-compatible server, which includes Ollama! I need to update the documentation to explain this, but essentially you can follow these instructions: https://docs.khoj.dev/get-started/setup#configure-openai-or-a-custom-openai-compatible-proxy-server.
You'll want to set your Ollama server's URL (typically http://localhost:11434/) as the base URL in the OpenAI settings, and it should work directly.
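To sanity-check the setup outside Khoj, here is a minimal sketch of pointing the official `openai` Python client at a local Ollama server. Ollama exposes its OpenAI-compatible API under the `/v1` path; the `openai_base_url` helper below is purely illustrative (not part of Khoj or Ollama) and just normalizes the URL:

```python
from urllib.parse import urlparse

def openai_base_url(ollama_url: str) -> str:
    """Normalize an Ollama server URL into an OpenAI-compatible base URL.

    Ollama serves its OpenAI-compatible API under /v1, so
    http://localhost:11434/ becomes http://localhost:11434/v1.
    (Helper name and behavior are illustrative, not part of Khoj.)
    """
    if not urlparse(ollama_url).scheme:
        raise ValueError(f"expected a full URL, got {ollama_url!r}")
    base = ollama_url.rstrip("/")
    return base if base.endswith("/v1") else base + "/v1"

# Usage with the official `openai` client (requires a running Ollama server
# and a pulled model, e.g. `ollama pull llama3`):
#
#   from openai import OpenAI
#   client = OpenAI(base_url=openai_base_url("http://localhost:11434/"),
#                   api_key="ollama")  # Ollama ignores the key; any string works
#   reply = client.chat.completions.create(
#       model="llama3:latest",
#       messages=[{"role": "user", "content": "Hello"}],
#   )
```

If this works but Khoj does not, the problem is in the Khoj settings rather than the Ollama server itself.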
I'll update the instructions to make this more clear, as using Ollama is definitely a common scenario.
@sabaimran thanks will try it out
Hey @HakaishinShwet ! I've tested this out and verified it works. Here are some more specific instructions: https://docs.khoj.dev/miscellaneous/ollama.
Excuse me, does connecting to Ollama require configuring OPENAI_API_BASE? I configured it according to the documentation, but it doesn't work. The endpoint api/chat?q=1&n=5&client=web&stream=true&conversation_id=8&region=null&city=null&country=null&timezone=null returns a 500 Server Error: NotFoundError: 404 page not found.
Sending input the default way shows "WebSocket is closed now." How do I properly connect to a local Ollama? Other products can connect to Ollama fine. My chat model settings are:
- Name: ollama
- API key: any string
- API base URL: http://host.docker.internal:11434/
- Max prompt size: 1000
- Tokenizer: (empty)
- Chat model: llama3:latest
- Model type: openai
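When the WebSocket drops like this, a useful first step is to confirm that the machine (or container) running Khoj can actually reach the Ollama server at the configured address. A minimal sketch, querying Ollama's `/api/tags` endpoint (which lists pulled models); the helper name is mine, not Khoj's:

```python
import json
import urllib.request

def list_ollama_models(base_url: str, timeout: float = 3.0):
    """Return model names from an Ollama server, or None if unreachable.

    Queries Ollama's /api/tags endpoint, which lists pulled models.
    (Helper name is illustrative, not part of Khoj or Ollama.)
    """
    url = base_url.rstrip("/") + "/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
    except OSError:  # covers connection refused, DNS failure, timeout
        return None
    return [m["name"] for m in data.get("models", [])]

# Run this from inside the Khoj container; with llama3 pulled it should
# include "llama3:latest":
#   print(list_ollama_models("http://host.docker.internal:11434"))
# If it returns None, the container cannot reach Ollama at that address,
# and the WebSocket error is a symptom of that.
```

On Linux, `host.docker.internal` is not available by default; you may need to add `--add-host=host.docker.internal:host-gateway` to the container or use the host's IP instead.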
@mingLvft I am facing the same WebSocket issue. @sabaimran, any guidance on this?
I followed the same steps, but it shows "disconnected" when I try to connect with Ollama, and I don't know why. My other Ollama web services work fine, so I'm not sure what I'm missing. I've tested a lot; a short video on it might guide people better.
Ollama is pretty popular, and running local models through it is easy too, so adding default support for it would be great. I read a similar issue that was closed, so I'm reopening it here. Also, since I'm using Ollama locally, how can I integrate it with Khoj if you still don't intend to support Ollama? I'm asking so that I don't have to re-download models and re-test many things. Is LiteLLM helpful in this scenario? I have experience using it to run a local proxy server and connect with OpenAI-compatible APIs.
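On the LiteLLM question: yes, it can sit between Khoj and Ollama as a local OpenAI-compatible proxy, though with the base-URL approach above it usually isn't needed. As a sketch, a LiteLLM proxy config using its standard `model_list` format might look like this (the model name and port are examples, not anything Khoj requires):

```yaml
# config.yaml, started with: litellm --config config.yaml
model_list:
  - model_name: llama3              # name exposed to OpenAI-compatible clients
    litellm_params:
      model: ollama/llama3          # LiteLLM's Ollama provider prefix
      api_base: http://localhost:11434
```

You would then set Khoj's OpenAI base URL to the proxy's address (recent LiteLLM versions listen on port 4000 by default) instead of pointing it at Ollama directly.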