Fill in `ollama` for `LLM_PROVIDER` in `.env.ai`. The models below are the defaults used for Ollama, so you don't need to set `GENERATION_MODEL`, `EMBEDDING_MODEL`, or `EMBEDDING_MODEL_DIMENSION`. However, if you would like to try other LLMs, you need to fill in these environment variables.
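A minimal `.env.ai` sketch for the Ollama setup described above. The model names match the `ollama pull` steps below; the embedding dimension value is an assumption (nomic-embed-text commonly uses 768) and should be checked against your model:

```shell
# .env.ai — sketch, assuming the default Ollama models
LLM_PROVIDER=ollama

# Defaults for Ollama; uncomment and edit only when trying other LLMs
# GENERATION_MODEL=llama3:8b
# EMBEDDING_MODEL=nomic-embed-text
# EMBEDDING_MODEL_DIMENSION=768  # assumed value; verify for your embedding model
```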
Minor modification after merging the community's contribution to support third-party OpenAI-compatible APIs: https://github.com/Canner/WrenAI/pull/365
Steps to test a custom LLM:
ollama pull llama3:8b
ollama pull nomic-embed-text