YIRU69 opened 1 day ago
Hey @YIRU69, what is the format in which you've added the model name? And did you go through the first-run experience where Khoj asks which chat models to use? It usually sets up llama3.x 8b as one of the default models available, in which case you just have to select it on the settings page at http://localhost:42110/settings.
You need to use the HuggingFace repo name format of <person/org>/<model-name>.
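As a quick sanity check outside of Khoj itself, you can confirm a name follows that format and that the repo actually exists. This is just an illustrative sketch using the huggingface_hub client, not something Khoj runs:

```python
# Hedged sketch (not part of Khoj): verify a chat model name is a valid
# <person/org>/<model-name> HuggingFace repo id and that the repo is reachable.
from huggingface_hub import list_repo_files

repo_id = "bartowski/Meta-Llama-3.1-8B-Instruct-GGUF"  # example in the expected format
assert repo_id.count("/") == 1, "expected exactly one '/' between the org/user and the repo name"

# Listing the repo files confirms the repo exists and contains GGUF weights.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(f"{repo_id} resolves on HuggingFace and has {len(gguf_files)} GGUF file(s)")
```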
Thank you, I will try it again later. But I did add the model name, bartowski/Meta-Llama-3.1-8B-Instruct-GGUF, and it still doesn't work.
Update: it seems the openai config also needs to be added.
I tried it again, but it still doesn't work. Details follow.
Ok, there's been some confusion. Let's clear it up. There are multiple ways to use local chat models with Khoj. Based on the discussion on Discord, it seems you may have been trying to set up Khoj to use a local chat model with Ollama? If so, see the docs to set up Khoj with Ollama for the most accurate instructions. In general though:
- Chat model llama3.1:8b: use this with your Ollama setup (an openai config pointing Khoj at Ollama).
- Chat model bartowski/Meta-Llama-3.1-8B-Instruct-GGUF with model type Offline: Khoj will download that chat model and run it directly, without using any service/API like Ollama (roughly sketched below).
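To make the Offline case concrete, here is a rough sketch of what "download the model and run it directly" amounts to, using llama-cpp-python with an assumed Q4_K_M quantization filename; Khoj's own internals may differ:

```python
# Rough sketch of the Offline path: pull the GGUF from HuggingFace and run it locally
# with llama-cpp-python. The quantization filename pattern is an assumption;
# Khoj's actual implementation details may differ.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; any .gguf file present in the repo works
    n_ctx=4096,
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one word."}]
)
print(reply["choices"][0]["message"]["content"])
```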
Specifically, this config should either set the chat model field to llama3.1:8b to use with your Ollama setup, or set the model type to Offline and unset the openai config field to load the model directly in Khoj, but not do both.
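For the Ollama route, it can also help to confirm that Ollama's OpenAI-compatible endpoint actually serves llama3.1:8b before debugging Khoj. A minimal check, assuming Ollama is running locally on its default port 11434:

```python
# Minimal sketch: confirm Ollama's OpenAI-compatible endpoint serves llama3.1:8b,
# assuming Ollama is running locally on its default port 11434.
import json
import urllib.request

payload = {
    "model": "llama3.1:8b",  # must match the chat model name configured in Khoj
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```

If this prints a reply, the openai config in Khoj just needs its API base URL pointed at http://localhost:11434/v1, per the Ollama setup docs linked above (Ollama accepts any placeholder API key).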
On my Windows 11 system, I use the model llama3.1:8b. The error messages are as follows.
(Screenshots of the error attached.)