Closed ptgoetz closed 5 months ago
@mkorpela and @ptgoetz, it seems that retrieval has been moved from local Postgres to Azure (and the embedding model as well). I rather like being able to run everything locally; wouldn't it be better to let the user choose? Furthermore, not every user may want to set up an Azure account and so on.
I misread the code; it seems this only chooses the embedding model. I have both OPENAI_API_KEY and AZURE_OPENAI_API_KEY in my environment so that I can choose the LLM "supplier". In that case it will always pick the OpenAI embeddings, no matter whether I choose the LLM from OpenAI or Azure. Should the embedding model also be configurable? E.g.:
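Something along these lines, as a minimal sketch: pick the embedding "supplier" from an explicit environment variable, falling back to whichever API key is set. `EMBEDDINGS_PROVIDER` and the function name are hypothetical; they are not part of the current code.

```python
import os

def select_embeddings_provider() -> str:
    """Hypothetical helper: decide which embeddings backend to use.

    An explicit EMBEDDINGS_PROVIDER (assumed name) wins; otherwise fall
    back to whichever API key is present, preferring Azure if both are set.
    """
    provider = os.environ.get("EMBEDDINGS_PROVIDER")
    if provider:
        return provider
    if os.environ.get("AZURE_OPENAI_API_KEY"):
        return "azure"
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    raise RuntimeError("No embeddings provider configured")
```

That way having both keys in the environment would no longer silently force the OpenAI embeddings.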
What?
Add an option to use Ollama LLMs for agents
New "Ollama" option:
Bot created with the Ollama LLM:
Test Llama2 LLM:
Test OpenChat LLM:
For now, it only supports one Ollama Model. Future Pull Requests will address this shortcoming.
Configuration is currently driven by environment variables:
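For example, something like the following (the variable names here are illustrative; the actual names are defined in the PR diff):

```shell
# Hypothetical configuration sketch for the Ollama option
export OLLAMA_BASE_URL="http://localhost:11434"  # where the local Ollama server listens
export OLLAMA_MODEL="llama2"                     # single model supported for now
```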
I will follow up with PRs that make the model configuration multi-model and more dynamic.
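For reference, calling a local Ollama server from an agent boils down to one HTTP request against its `/api/generate` endpoint. The sketch below is not the code in this PR, just a minimal stand-alone illustration of the call; the default model name mirrors the single-model limitation noted above.

```python
import json
import os
import urllib.request

def build_generate_payload(prompt: str, model: str) -> dict:
    # "stream": False asks Ollama to return one JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str,
                    model: str = os.environ.get("OLLAMA_MODEL", "llama2"),
                    base_url: str = "http://localhost:11434") -> str:
    """Send a prompt to a local Ollama server and return its response text."""
    data = json.dumps(build_generate_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Making the `model` argument configurable per bot (rather than one env var for everything) is the obvious next step for the multi-model follow-up.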