Completely local RAG with a chat UI
Check out the RagBase demo on Streamlit Cloud. It runs with the Groq API.
Clone the repo:
git clone git@github.com:curiousily/ragbase.git
cd ragbase
Install the dependencies (requires Poetry):
poetry install
Fetch your LLM (gemma2:9b by default):
ollama pull gemma2:9b
Run the Ollama server:
ollama serve
Start RagBase:
poetry run streamlit run app.py
How it works

Ingestor: extracts text from PDF documents and creates chunks (using semantic and character splitters) that are stored in a vector database.
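A minimal sketch of that ingestion flow, assuming LangChain with the pypdf, FastEmbed, and Qdrant integrations; the file name, chunk sizes, and store path are illustrative, not necessarily what RagBase itself uses:

# Illustrative ingestion sketch; loader, embeddings, and store are assumptions.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain_experimental.text_splitter import SemanticChunker
from langchain_text_splitters import RecursiveCharacterTextSplitter

embeddings = FastEmbedEmbeddings()

# Load the PDF, split on semantic boundaries first, then cap chunk length.
docs = PyPDFLoader("paper.pdf").load()
semantic_chunks = SemanticChunker(embeddings).split_documents(docs)
chunks = RecursiveCharacterTextSplitter(
    chunk_size=2048, chunk_overlap=128
).split_documents(semantic_chunks)

# Persist the chunks in a local on-disk vector database.
db = Qdrant.from_documents(
    chunks, embeddings, path="./db", collection_name="documents"
)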
Retriever: given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
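One way to build such a pipeline is LangChain's contextual compression retriever, sketched below; the FlashRank reranker is an assumption (it needs the flashrank package), and db comes from the ingestion sketch above:

# Illustrative retrieval sketch; reranker and filter choices are assumptions.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    FlashrankRerank,
    LLMChainFilter,
)
from langchain_community.chat_models import ChatOllama

# The default local model pulled earlier, used here to filter retrieved chunks.
llm = ChatOllama(model="gemma2:9b")

# Similarity search over the vector database built during ingestion.
base_retriever = db.as_retriever(search_kwargs={"k": 10})

# Rerank the raw hits, then let the LLM drop chunks irrelevant to the query.
pipeline = DocumentCompressorPipeline(
    transformers=[FlashrankRerank(), LLMChainFilter.from_llm(llm)]
)
retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=base_retriever
)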
QA chain: combines the LLM with the retriever to answer a given user question.
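Wiring those pieces together with LCEL might look like the sketch below; the prompt wording is illustrative, and retriever and llm come from the sketches above:

# Illustrative QA chain sketch using the retriever and llm defined above.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join the retrieved chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

# Retrieve context for the question, fill the prompt, and generate an answer.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(chain.invoke("What is this document about?"))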
You can also use the Groq API instead of the local LLM. For that, you'll need a .env file with your Groq API key:
GROQ_API_KEY=YOUR_API_KEY
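With the key in place, swapping the LLM could look like this, using python-dotenv and the langchain-groq package; the model name is illustrative and may differ from what RagBase selects:

# Illustrative Groq setup; the model name is an assumption.
from dotenv import load_dotenv
from langchain_groq import ChatGroq

load_dotenv()  # reads GROQ_API_KEY from the .env file into the environment
llm = ChatGroq(model="llama3-70b-8192", temperature=0)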