QuivrHQ / quivr

Opinionated RAG for integrating GenAI in your apps 🧠 Focus on your product rather than the RAG. Easy integration in existing products with customisation! Any LLM: GPT-4, Groq, Llama. Any Vectorstore: PGVector, Faiss. Any Files. Any way you want.
https://core.quivr.com

[Bug]: How can I configure Quivr to use a local Ollama model? I can't find a config example and there is no .env.example file #3457

Open lijinxinKing opened 2 weeks ago

lijinxinKing commented 2 weeks ago

What happened?

How can I configure Quivr to use a local Ollama model? I can't find a config example and there is no .env.example file.

Relevant log output

How can I configure Quivr to use a local Ollama model? I can't find a config example and there is no .env.example file.

Twitter / LinkedIn details

No response

linear[bot] commented 2 weeks ago

CORE-285 [Bug]: How can I configure Quivr to use a local Ollama model? I can't find a config example and there is no .env.example file

dosubot[bot] commented 2 weeks ago

Hey @lijinxinKing! I'm here to help you with any bugs, questions, or contributions. Let's dive into this together!

To configure Quivr to use a local Ollama model, follow these steps:

  1. Install Ollama: Download and install Ollama from the official website (https://ollama.com).

  2. Run Ollama in the Background: Use the command below to start Ollama:

    ollama run llama2
  3. Run Migrations: Move the migration file and reset the database:

    mv supabase/migrations/local_20240107152745_ollama.sql supabase/migrations/20240107152745_ollama.sql
    supabase db reset
  4. Add Ollama Model to Quivr: Update the user_settings table in Supabase to include the Ollama models:

    [
     "ollama/llama2",
     "ollama/mistral"
    ]

    If you want to enable it for your user only, add these entries to the models column of your row in the user_settings table. If you want it to become the default for all new users, clear the table (delete all of its rows) so the new defaults are applied:

    DELETE FROM user_settings;
  5. Configure .env File: Update your .env file to activate the Ollama API by setting the OLLAMA_API_BASE_URL:

    # LOCAL
    OLLAMA_API_BASE_URL=http://host.docker.internal:11434
  6. Integrate with Quivr: Ensure that the QuivrRAG class in quivr_rag.py is configured to use the Ollama model. The ChatLiteLLM class should be instantiated with the api_base parameter set to the OLLAMA_API_BASE_URL if the model starts with "ollama". This is handled in the _create_llm method of the QuivrRAG class.
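
For reference, here is a minimal sketch of what that branch can look like, assuming ChatLiteLLM from langchain_community and that OLLAMA_API_BASE_URL is read from the environment; the helper name and defaults below are illustrative, not Quivr's exact code:

    import os

    from langchain_community.chat_models import ChatLiteLLM

    def create_llm(model: str, temperature: float = 0.0, max_tokens: int = 1000) -> ChatLiteLLM:
        """Build the chat model, pointing LiteLLM at the local Ollama server
        for "ollama/..." models (illustrative helper, not Quivr's exact method)."""
        api_base = None
        if model.startswith("ollama"):
            # OLLAMA_API_BASE_URL is the value set in the .env file in step 5.
            api_base = os.getenv("OLLAMA_API_BASE_URL", "http://host.docker.internal:11434")
        return ChatLiteLLM(
            model=model,
            temperature=temperature,
            max_tokens=max_tokens,
            api_base=api_base,
        )

    # Usage: llm = create_llm("ollama/llama2")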

Additionally, you may need to update the embedding dimension in .env.example to match the Ollama model's requirements and modify certain files to use ChatOllama instead of ChatOpenAI. Rebuild the Docker image and the Supabase containers after making these changes [1][2].
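
If you take the ChatOllama route, the substitution is roughly the following; treat it as a sketch assuming langchain_community's ChatOllama and the same OLLAMA_API_BASE_URL, since the exact files to edit depend on your Quivr version:

    import os

    # Before (hosted OpenAI):
    # from langchain_community.chat_models import ChatOpenAI
    # llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

    # After (local Ollama):
    from langchain_community.chat_models import ChatOllama

    llm = ChatOllama(
        model="llama2",  # the model pulled in step 2 via `ollama run llama2`
        base_url=os.getenv("OLLAMA_API_BASE_URL", "http://host.docker.internal:11434"),
        temperature=0,
    )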

To continue talking to Dosu, mention @dosu.

wonderu commented 2 weeks ago

Your @dosubot is useless and harmful! It proposes wiping an entire table.