Closed 8HoLoN closed 5 months ago
This is actually not a bug. You need two things to run a RAG stack: an LLM and an embedding model.
In your case, you are using Ollama as the LLM, and by default (unless you specify otherwise) the library uses OpenAI's large embedding model as its embedding model.
The error you see comes from the embedding model being unable to reach OpenAI. Right now there is no support for local embedding models via Ollama (only Ollama-based LLMs are supported), but there is a plan to add it soon. There are other embedding models to choose from, though; refer to the documentation on that.
The embedding model is set via `.setEmbeddingModel(new AdaEmbeddings())`.
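To make the two-model split concrete, here is a minimal configuration sketch. It assumes embedJs's builder-style API; the package name, class names (`RAGApplicationBuilder`, `Ollama`), and option names (`modelName`, `baseUrl`) are illustrative and may differ in the library version you have installed — only `.setEmbeddingModel(new AdaEmbeddings())` comes directly from this thread.

```typescript
// Hypothetical sketch of an embedJs setup: an Ollama LLM for generation,
// with the embedding model set explicitly. Class and option names are
// assumptions based on this thread, not a verified API reference.
import { RAGApplicationBuilder, Ollama, AdaEmbeddings } from '@llm-tools/embedjs';

const app = await new RAGApplicationBuilder()
    // Local Ollama model handles generation...
    .setModel(new Ollama({ modelName: 'llama3', baseUrl: 'http://localhost:11434' }))
    // ...but embeddings still default to OpenAI unless overridden, which is
    // why an OpenAI API key is required even when running the Ollama example.
    .setEmbeddingModel(new AdaEmbeddings())
    .build();
```

The key point: setting the LLM and setting the embedding model are two independent choices, and leaving the second one at its default is what triggers the OpenAI key lookup.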
Is there a pair of LLM and embedding model that does not require an API key at all?
Edit: OK, I read through all the embedding models and there is no embedding model that does not require an API key.
Thanks. Let me know when embedJs provides a full RAG stack that is fully local, without the need for an API key :)
Will do.
Currently the roadmap includes adding support for Ollama-based local embedding models, but it is not expected to be available before the start of Q3. If you are interested, you can contribute a PR with a non-API-key-based embedding model and I will prioritize merging it.
Hi, using the local Ollama LLM should not require the OpenAI or Azure OpenAI LLM, but currently the error

Error: OpenAI or Azure OpenAI API key or Token Provider not found

is thrown while attempting to run the Ollama example.