
Retrieval augmented generation with quantized LLM

Retrieval augmented generation (RAG) demos with Mistral, Zephyr, Phi-3, Gemma-2, Llama-3, Llama-3.1

The demos use quantized models and run on CPU with acceptable inference time. They can run offline without Internet access, thus allowing deployment in an air-gapped environment.

The demos also allow users to upload their own PDFs and build vector databases from them (see Usage below).

🔧 Getting Started

You will need to set up your development environment using conda. Create and activate the environment, then install the dependencies:

conda create --name rag python=3.11
conda activate rag
pip install -r requirements.txt

We shall use unstructured to process PDFs. Refer to its Installation Instructions for Local Development.

You will also need to download the punkt_tab and averaged_perceptron_tagger_eng resources from nltk:

import nltk
nltk.download('punkt_tab')
nltk.download('averaged_perceptron_tagger_eng')

Note that we shall only use strategy="fast" in this demo; extraction of tables from PDFs is still a work in progress.
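As a rough illustration of what the fast strategy looks like when calling unstructured directly (the actual loading code lives in the repo's source, so treat this as a sketch; the file name is a placeholder):

from unstructured.partition.pdf import partition_pdf

# Parse a PDF with the lightweight "fast" strategy (no OCR, no layout model).
elements = partition_pdf(filename="sample.pdf", strategy="fast")

# Each element carries the extracted text plus metadata such as the page number.
for element in elements:
    print(type(element).__name__, element.text[:80])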

Activate the environment.

conda activate rag

🧠 Use different LLMs

Using a different LLM might lead to poor responses, or even no response at all. Switching models will require testing, prompt engineering and code refactoring.

Download and save the models in ./models and update config.yaml. The models used in this demo are quantized versions of the LLMs listed above (Mistral, Zephyr, Phi-3, Gemma-2, Llama-3 and Llama-3.1).
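The quantized weights can be fetched from the Hugging Face Hub. As an illustrative sketch (the repository and file names below are placeholders, not necessarily the exact models used here):

# download a quantized GGUF file into ./models (placeholder repo/file names)
huggingface-cli download bartowski/Meta-Llama-3.1-8B-Instruct-GGUF \
  Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf --local-dir ./models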

The LLMs can be loaded directly in the app, or they can first be served with an Ollama server.
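If you go the Ollama route, the model has to be pulled and the server started before launching the app. A minimal example, assuming a model tag such as llama3.1 (substitute whichever model you configured):

# start the Ollama server (skip if it is already running as a service)
ollama serve
# in another terminal, pull the model referenced in config.yaml
ollama pull llama3.1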

You can also choose to use models from Groq. Set GROQ_API_KEY in .env.
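For example, .env would contain a single line with your key (placeholder value shown):

GROQ_API_KEY=<your-groq-api-key>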

Add prompt format

Since each model type has its own prompt format, include the format in ./src/prompt_templates.py. For example, the format used by openbuddy models is:

"""{system}
User: {user}
Assistant:"""

🤖 Tracing

We shall use Phoenix for LLM tracing. Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. Before running the app, start a Phoenix server:

python3 -m phoenix.server.main serve

The traces can be viewed at http://localhost:6006.
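Phoenix ships on PyPI as arize-phoenix; if it is not already pulled in by requirements.txt, it can be installed before starting the server:

pip install arize-phoenix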

💻 App

We use Streamlit as the interface for the demos. There are three demos: app_conv.py, app_qa.py and app_react.py. Launch the first two with

streamlit run app_conv.py
streamlit run app_qa.py

NOTE: The ReAct agent demo (app_react.py) works for larger models like mixtral-8x7b-32768 that can handle reasoning tasks and tool calls. Smaller 7B models do not seem to work.

Create the vectorstore first and update config.yaml:

python -m vectorize --filepaths <your-filepath>
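The vectorize module encapsulates the details, but conceptually the step amounts to parsing the documents, chunking them, embedding the chunks and persisting a vector index. A generic LangChain-flavoured sketch, not the repo's actual code (the embedding model, file paths and chunking parameters are assumptions):

from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load and chunk the source document ("sample.pdf" is a placeholder).
docs = UnstructuredPDFLoader("sample.pdf", strategy="fast").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks with a local sentence-transformers model and build a FAISS index.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = FAISS.from_documents(chunks, embeddings)

# Persist the index so the app can load it offline.
vectordb.save_local("vectorstore/faiss_index")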

Run the app

streamlit run app_react.py

🔍 Usage

To get started, upload a PDF and click on Build VectorDB. Creating the vector DB will take a while.
