khoj-ai / khoj

Your AI second brain. Get answers to your questions, whether they be online or in your own notes. Use online AI models (e.g. GPT-4) or private, local LLMs (e.g. Llama 3). Self-host locally or use our cloud instance. Access from Obsidian, Emacs, Desktop app, Web, or WhatsApp.
https://khoj.dev
GNU Affero General Public License v3.0

Default ollama support? #740

Closed HakaishinShwet closed 3 months ago

HakaishinShwet commented 4 months ago

Ollama is pretty famous, and running local models through it is easy too, so adding default support for it would be great. I read a similar issue that was closed, so I'm reopening it here. Also, since I'm already using Ollama locally, how can I integrate it with Khoj if you still don't intend to support Ollama directly? I'm asking so that I don't have to re-download models and test many more things. Is LiteLLM helpful in this scenario? I have experience using it to run a local proxy server and connect to OpenAI-compatible APIs.

sabaimran commented 4 months ago

Hi @HakaishinShwet ! We had added support for running with your own custom OpenAI-compatible server, which does include Ollama! I need to update the documentation to explain this, but essentially you can follow these instructions: https://docs.khoj.dev/get-started/setup#configure-openai-or-a-custom-openai-compatible-proxy-server.

You'll want to set the URL of your Ollama server (typically http://localhost:11434/) as the base URL in the OpenAI settings, and it should work directly.

I'll update the instructions to make this more clear, as using Ollama is definitely a common scenario.
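For anyone following along, here is a minimal sanity-check sketch (not official Khoj guidance) to confirm your Ollama server is reachable and speaks the OpenAI-compatible API before wiring it into Khoj. It assumes a recent Ollama version that serves OpenAI-compatible endpoints under `/v1` and that you have already pulled a `llama3` model.

```python
# Sanity check: talk to Ollama through its OpenAI-compatible endpoint.
# Assumes Ollama is running on the default port and that a "llama3"
# model has already been pulled (`ollama pull llama3`).
from openai import OpenAI  # pip install "openai>=1.0"

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # any non-empty string; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

If this works but Khoj still fails, the problem is likely on the Khoj configuration side rather than in Ollama itself.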

HakaishinShwet commented 4 months ago

@sabaimran thanks, will try it out.

sabaimran commented 3 months ago

Hey @HakaishinShwet ! I've tested this out and verified it works. Here are some more specific instructions: https://docs.khoj.dev/miscellaneous/ollama.

mingLvft commented 3 months ago

Excuse me, when connecting to Ollama do I need to configure OPENAI_API_BASE? I configured it according to the documentation but it doesn't work. The endpoint api/chat?q=1&n=5&client=web&stream=true&conversation_id=8&region=null&city=null&country=null&timezone=null returns a 500 Server Error with NotFoundError: 404 page not found.

mingLvft commented 3 months ago

Sending input using the default method just shows "WebSocket is closed now." How do I properly connect to a local Ollama? Other products can connect to Ollama without problems. My chat model settings are:

- Name: ollama
- API key: any string
- API base URL: http://host.docker.internal:11434/
- Max prompt size: 1000
- Tokenizer: (not set)
- Chat model: llama3:latest
- Model type: openai
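A hedged debugging sketch, not official guidance: when Khoj runs in Docker and Ollama runs on the host, the container has to be able to reach http://host.docker.internal:11434/. The script below, run from inside the Khoj container, checks basic reachability and lists the models Ollama reports via its /api/tags endpoint; the llama3:latest model name is taken from the settings above.

```python
# Run inside the Khoj container to verify it can reach the host's Ollama server.
# Uses only the standard library, so no extra packages are needed.
import json
import urllib.request

OLLAMA_BASE = "http://host.docker.internal:11434"  # base URL from the settings above

try:
    with urllib.request.urlopen(f"{OLLAMA_BASE}/api/tags", timeout=5) as resp:
        data = json.load(resp)
except OSError as err:
    raise SystemExit(f"Cannot reach Ollama at {OLLAMA_BASE}: {err}")

models = [m["name"] for m in data.get("models", [])]
print("Ollama is reachable. Available models:", models)
if "llama3:latest" not in models:
    print("Note: llama3:latest is not listed; pull it with `ollama pull llama3`.")
```

If the request fails, the issue is container-to-host networking (e.g. host.docker.internal not resolving) rather than the Khoj chat model settings.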

HakaishinShwet commented 3 months ago

@mingLvft I'm facing the same WebSocket issue. @sabaimran, any guidance on this?

HakaishinShwet commented 3 months ago

I followed the same steps, but it shows disconnected and I don't know why. When I try to connect to Ollama it fails, while my other Ollama web services work fine, so I don't know what I'm missing. I've tested a lot; maybe a short video on this would help.