lijinxinKing opened this issue 2 weeks ago
Hey @lijinxinKing! I'm here to help you with any bugs, questions, or contributions. Let's dive into this together!
To configure Quivr to use a local Ollama model, follow these steps:
Install Ollama: Download and install Ollama from their website.
Run Ollama in the Background: Use the command below to start Ollama:
ollama run llama2
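Before continuing, it can help to confirm that the local Ollama server is actually reachable. The snippet below is just a sanity check, not part of Quivr; it assumes Ollama's default port 11434 and queries the standard /api/tags endpoint, which lists the models available locally.

# Quick sanity check that a local Ollama server is reachable (assumes default port 11434).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/tags"  # lists locally available models

with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
    models = json.load(resp).get("models", [])

print("Ollama is up. Local models:", [m.get("name") for m in models])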
Run Migrations: Move the migration file and reset the database:
mv supabase/migrations/local_20240107152745_ollama.sql supabase/migrations/20240107152745_ollama.sql
supabase db reset
Add Ollama Models to Quivr: Update the user_settings table in Supabase to include the Ollama models:
[
"ollama/llama2",
"ollama/mistral"
]
If you want to add the models for your user only, add them to the models column in the user_settings table (a sketch of this follows below). If you want to set them as the default for all new users, clear the existing rows in the table with the following command (note: this deletes all existing per-user settings):
DELETE FROM user_settings;
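If you prefer to make the per-user change programmatically rather than through the Supabase dashboard, a rough sketch using the supabase-py client is below. The user_settings table and models column come from the step above; the SUPABASE_URL and SUPABASE_SERVICE_KEY variables and the user_id filter column are assumptions, so adapt them to your actual schema.

# Sketch: set the Ollama models on one user's settings row via supabase-py.
# The "user_id" filter column is an assumption; adjust to your schema.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

supabase.table("user_settings").update(
    {"models": ["ollama/llama2", "ollama/mistral"]}
).eq("user_id", "<your-user-uuid>").execute()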
Configure the .env File: Update your .env file to activate the Ollama API by setting OLLAMA_API_BASE_URL:
# LOCAL
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
Integrate with Quivr: Ensure that the QuivrRAG class in quivr_rag.py is configured to use the Ollama model. The ChatLiteLLM class should be instantiated with the api_base parameter set to OLLAMA_API_BASE_URL if the model name starts with "ollama". This is handled in the _create_llm method of the QuivrRAG class.
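For reference, the wiring described above looks roughly like the sketch below. This is a simplified illustration, not Quivr's actual _create_llm implementation; it only assumes langchain's ChatLiteLLM class, which accepts an api_base argument.

# Simplified sketch of the pattern described above (not Quivr's exact code).
import os
from langchain_community.chat_models import ChatLiteLLM

def create_llm(model: str, temperature: float = 0.0, max_tokens: int = 1000):
    api_base = None
    if model.startswith("ollama"):
        # Route "ollama/..." models through the local Ollama server.
        api_base = os.environ.get("OLLAMA_API_BASE_URL", "http://host.docker.internal:11434")
    return ChatLiteLLM(
        model=model,            # e.g. "ollama/llama2"
        temperature=temperature,
        max_tokens=max_tokens,
        api_base=api_base,      # stays None for hosted providers such as OpenAI
    )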
Additionally, you may need to update the embedding dimension in .env.example to match the model's requirements and modify certain files to use ChatOllama instead of ChatOpenAI. Rebuild the image and Supabase after making these changes [1][2].
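As a rough illustration of that last point, the swap looks something like the sketch below. It uses langchain's ChatOllama and OllamaEmbeddings classes; the model name and the 4096 embedding dimension mentioned in the comments are assumptions based on llama2 (OpenAI's text-embedding-ada-002 uses 1536), so verify them for the model you actually run.

# Illustrative sketch of using ChatOllama instead of ChatOpenAI (not Quivr's exact code).
import os
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings

base_url = os.environ.get("OLLAMA_API_BASE_URL", "http://host.docker.internal:11434")

# Chat model served by the local Ollama instance rather than the OpenAI API.
llm = ChatOllama(model="llama2", base_url=base_url)

# Local embeddings; their dimension (typically 4096 for llama2) must match the
# embedding size configured in .env and in the Supabase vectors table.
embeddings = OllamaEmbeddings(model="llama2", base_url=base_url)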
To continue talking to Dosu, mention @dosu.
Your @dosubot is useless and harmful! It proposes to delete a table.
What happened?
How can I configure Quivr to use a local Ollama model? I can't find a config example, and there is no env.example file.