Closed: chrisbward closed this issue 7 months ago
I suggest using LocalAI with a custom LLM, then connecting LinGoose to LocalAI via a custom OpenAI client (`WithClient()`) pointed at the local endpoint.
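A minimal sketch of what that wiring could look like, assuming the `sashabaranov/go-openai` client underneath and LocalAI running on its default port 8080; the model name and API key placeholder are illustrative, and the exact LinGoose constructor may differ by version:

```go
package main

import (
	goopenai "github.com/sashabaranov/go-openai"

	"github.com/henomis/lingoose/llm/openai"
)

func main() {
	// Point the OpenAI client at the local LocalAI endpoint instead of api.openai.com.
	// LocalAI ignores the token, but the client requires a non-empty value.
	cfg := goopenai.DefaultConfig("sk-local")
	cfg.BaseURL = "http://localhost:8080/v1" // assumed LocalAI default port

	client := goopenai.NewClientWithConfig(cfg)

	// Hand the custom client to LinGoose via WithClient().
	llm := openai.New().WithClient(client).WithModel("gpt-3.5-turbo")

	_ = llm // use llm with chains, RAG pipelines, etc.
}
```

From here the `llm` value can be used anywhere LinGoose expects an OpenAI-backed LLM, with all traffic staying on the local endpoint.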
@henomis Can you comment on why localAI and not Ollama?
nvm, I see. It means you don't have to do any work.
Shame, because Ollama presents much nicer development ergonomics, specifically its similarity to Docker:

- `Dockerfile` 👉🏻 `Modelfile`
- `docker build ...` 👉🏻 `ollama create ...`
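To make the analogy concrete, here is a minimal Modelfile sketch (model name and parameter values are illustrative; see Ollama's Modelfile reference for the full syntax):

```
# Modelfile — plays the role a Dockerfile plays for images
FROM llama2
PARAMETER temperature 0.7
SYSTEM You are a helpful assistant.
```

Building and running then mirror the Docker workflow:

```shell
ollama create my-assistant -f Modelfile
ollama run my-assistant
```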
@airtonix I will check this project and the possibility of integrating it into Lingoose. Thanks for the suggestion.
Ollama will be supported in the next lingoose version.
As titled, I would prefer to use a local LLM instead of OpenAI's GPT. I arrived here via this tutorial/introduction to RAG:
https://simonevellei.com/blog/posts/leveraging-go-and-redis-for-efficient-retrieval-augmented-generation/