langchain-ai / opengpts

MIT License

initial implementation of Ollama LLM #276

Closed ptgoetz closed 5 months ago

ptgoetz commented 5 months ago

What?

Add an option to use Ollama LLMs for agents

New "Ollama" option:

Screenshot 2024-04-06 at 7 23 41 PM

Bot created with the Ollama LLM:

Screenshot 2024-04-06 at 7 24 04 PM

Test Llama2 LLM:

Screenshot 2024-04-06 at 7 59 21 PM

Test OpenChat LLM:

Screenshot 2024-04-06 at 8 26 10 PM

For now, only one Ollama model is supported at a time; future pull requests will address this shortcoming.

Configuration is currently driven by environment variables:

 export OLLAMA_MODEL=openchat
 export OLLAMA_BASE_URL=http://localhost:11434/
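The env-var-driven configuration above could be read like this (a minimal sketch; the helper name and the fallback defaults are assumptions for illustration, not code from this PR):

```python
import os

def ollama_config() -> dict:
    """Collect Ollama settings from the environment, with assumed defaults."""
    return {
        "model": os.environ.get("OLLAMA_MODEL", "llama2"),
        "base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434/"),
    }

os.environ["OLLAMA_MODEL"] = "openchat"
print(ollama_config()["model"])  # openchat
```

The resulting dict can then be passed straight to an Ollama chat-model constructor as keyword arguments.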

I will follow up with PRs that make the model configuration multi-model and more dynamic.

weipienlee commented 5 months ago

@mkorpela and @ptgoetz, it seems that retrieval has been moved from local Postgres to Azure (and the embedding model as well). I rather like the idea of being able to run everything locally; would it not be better to let the user choose? Furthermore, maybe not every user wants to set up an Azure account and such.

weipienlee commented 5 months ago

I misread the code; it seems only the embedding model is affected. I have both OPENAI_API_KEY and AZURE_OPENAI_API_KEY in my environment so that I can choose the LLM "supplier". In this case it will always use the OpenAI embeddings, no matter whether I choose an OpenAI or an Azure LLM. Should the embedding model also be configurable? E.g.:
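One possible shape for this (a sketch only; the `EMBEDDING_PROVIDER` variable and the factory below are hypothetical, not existing OpenGPTs code):

```python
import os

def embedding_provider() -> str:
    # Default to "openai" so current behaviour is preserved when unset.
    return os.environ.get("EMBEDDING_PROVIDER", "openai").lower()

def get_embeddings():
    """Pick the embedding 'supplier' the same way the LLM supplier is chosen."""
    if embedding_provider() == "azure":
        # Lazy import: an OpenAI-only setup never touches the Azure path.
        # AzureOpenAIEmbeddings reads its endpoint/key from the environment.
        from langchain_openai import AzureOpenAIEmbeddings
        return AzureOpenAIEmbeddings()
    from langchain_openai import OpenAIEmbeddings
    return OpenAIEmbeddings()
```

With this, the embedding backend follows an explicit setting rather than being hard-wired to OpenAI.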