Not using a local model here, but it works well with GPT-4o mini.
Example below. Note that we're passing in context retrieved by a vector search over the Quran, Hadith, and Names of Allah index we built, so this should maintain far higher accuracy than sending a question straight to ChatGPT and letting it answer from anywhere on the internet.
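A minimal sketch of that flow in Go, assuming the index was built with chromem-go using its default OpenAI embedding function (the query-time embedding function must match whatever built the index). The `./db` path, the `islamic-texts` collection name, and the question are placeholders, not the actual names from our index step:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"strings"

	"github.com/philippgille/chromem-go"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	ctx := context.Background()

	// Open the pre-generated index. Path and collection name are placeholders.
	db, err := chromem.NewPersistentDB("./db", false)
	if err != nil {
		panic(err)
	}
	coll := db.GetCollection("islamic-texts", chromem.NewEmbeddingFuncDefault())
	if coll == nil {
		panic("collection not found")
	}

	question := "What do the sources say about patience?"

	// Vector search: embed the question and fetch the closest passages.
	results, err := coll.Query(ctx, question, 5, nil, nil)
	if err != nil {
		panic(err)
	}
	var contextText strings.Builder
	for _, r := range results {
		contextText.WriteString(r.Content + "\n---\n")
	}

	// Pass the retrieved passages as context so the model answers from
	// our index rather than from arbitrary internet knowledge.
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
	resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
		Model: "gpt-4o-mini",
		Messages: []openai.ChatCompletionMessage{
			{
				Role:    openai.ChatMessageRoleSystem,
				Content: "Answer using only the provided context.\n\nContext:\n" + contextText.String(),
			},
			{Role: openai.ChatMessageRoleUser, Content: question},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```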
Using the RAG example from chromem-go, we can pass results from our pre-generated index as context when asking an LLM questions. This would be useful if we wanted to run a small local LLM rather than using OpenAI.
https://github.com/philippgille/chromem-go/tree/main/examples/rag-wikipedia-ollama
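For the local-model route just mentioned, the linked example uses Ollama directly; one alternative (an assumption on my part, not the repo's exact code) is to keep the go-openai client from the sketch above and point it at Ollama's OpenAI-compatible endpoint, so only the client config and model name change:

```go
// askLocal sends the same context-plus-question messages to a local Ollama
// server instead of OpenAI. Assumes Ollama is running on its default port
// and a model such as llama3 has already been pulled.
func askLocal(ctx context.Context, messages []openai.ChatCompletionMessage) (string, error) {
	config := openai.DefaultConfig("ollama") // the API key is ignored by Ollama
	config.BaseURL = "http://localhost:11434/v1"
	client := openai.NewClientWithConfig(config)

	resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
		Model:    "llama3",
		Messages: messages,
	})
	if err != nil {
		return "", err
	}
	return resp.Choices[0].Message.Content, nil
}
```

One caveat: even when the chat model runs locally, the embedding function used for queries still has to match the one used to build the index, so going fully local would also mean rebuilding the index with a local embedding model (as the linked example does).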