khoj-ai / khoj

Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (e.g. gpt, claude, gemini, llama, qwen, mistral).
https://khoj.dev
GNU Affero General Public License v3.0

Can't get Khoj to work when self-hosting. #959

Closed: YIRU69 closed this issue 2 weeks ago

YIRU69 commented 2 weeks ago

On my Windows 11 system, I use the model llama3.1:8b. The error messages are as follows.

Screenshots

[screenshot of the error message]

Platform: Windows 11, self-hosted

debanjum commented 2 weeks ago

Hey @YIRU69, what is the format in which you've added the model name? And did you go through the first run experience where Khoj asks which chat models to use? It usually sets up a llama 3.x 8b as one of the default models available, in which case you just have to select it on the settings page at http://localhost:42110/settings.

You need to use the HuggingFace repo name format of <org>/<model-name> to add your models of choice. For example, to use llama-3.1 8b you can add something like bartowski/Meta-Llama-3.1-8B-Instruct-GGUF. You can find chat models and their names on HuggingFace.

YIRU69 commented 2 weeks ago

Thank you, I will try it again later. But I did add the model name bartowski/Meta-Llama-3.1-8B-Instruct-GGUF, and it still doesn't work.

YIRU69 commented 2 weeks ago

Update: it seems I should also add the OpenAI config.

I tried it again, but it still doesn't work. Screenshots are below.

[four screenshots of the config pages attached]

debanjum commented 2 weeks ago

Ok, there's been some confusion. Let's clear it up. There are multiple ways to use local chat models with Khoj. Based on the discussion on Discord, it seems you may have been trying to set up Khoj to use a local chat model with Ollama? If so, see the docs to set up Khoj with Ollama for the most accurate instructions. In general though:

For Ollama

  1. You should set the chat model name to whichever chat model you're running on Ollama, e.g. llama3.1:8b.
  2. You should set an OpenAI Processor Config that points the OpenAI server URL to your Ollama server, and reference it in the ChatModelOptions (a quick way to verify that server is shown in the sketch below).
  3. You should set the ModelType to OpenAI. (This is because Ollama exposes an OpenAI API compatible server, which Khoj uses to interact with Ollama chat models.)
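
If you want to sanity-check the Ollama side before touching Khoj's settings, the sketch below calls Ollama's OpenAI-compatible endpoint directly. This is not Khoj code: it assumes Ollama is running on its default port 11434 and that you have already pulled llama3.1:8b, and it uses the dummy API key convention that Ollama's OpenAI compatibility layer accepts.

```python
# Minimal sketch (not Khoj code) to confirm Ollama's OpenAI-compatible
# endpoint responds, using the official openai Python client (>= 1.0).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API (default port assumed)
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.1:8b",  # must match the chat model name you give Khoj
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```

If this prints a reply, the same server URL and model name should work in Khoj's OpenAI Processor Config and ChatModelOptions.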

For Direct

  1. You should set the chat model name to bartowski/Meta-Llama-3.1-8B-Instruct-GGUF.
  2. You should set the ModelType to Offline.
  3. You should not set the OpenAI processor in the ChatModelOptions, as it is not used.

Khoj will download that chat model and run it directly, without using any service/API like Ollama.
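
As a rough sketch of what that direct/offline path amounts to (an assumption about the underlying mechanics, not Khoj's actual code), the snippet below uses llama-cpp-python to fetch a GGUF file from that HuggingFace repo and chat with it locally. It is a handy way to confirm the repo name is valid and that your machine can run the model.

```python
# Rough sketch, assuming an offline GGUF flow similar to llama-cpp-python's
# HuggingFace loader. Requires llama-cpp-python and huggingface_hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",  # HuggingFace repo name
    filename="*Q4_K_M.gguf",  # pick one quantization; glob patterns are allowed
    n_ctx=4096,
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello"}]
)
print(reply["choices"][0]["message"]["content"])
```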

debanjum commented 2 weeks ago

[screenshot of the user's chat model config]

Specifically, this config should either set the chat model field to llama3.1:8b to use your Ollama setup, or set the model type to Offline and unset the OpenAI config field to load the model directly in Khoj, but not do both.
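
In other words, the two valid combinations look roughly like this (field names paraphrased from this thread, not copied from the admin page):

```
# Option A: use your Ollama server
Chat model:    llama3.1:8b
Model type:    OpenAI
OpenAI config: set, pointing at your Ollama server URL

# Option B: load the model directly in Khoj
Chat model:    bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
Model type:    Offline
OpenAI config: unset
```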

YIRU69 commented 2 weeks ago

Thank you, I will try it again.

YIRU69 commented 2 weeks ago

It worked! Thanks!