Local RAG pipeline we're going to build:
All designed to run locally on an NVIDIA GPU.
All the way from PDF ingestion to "chat with PDF" style features.
All using open-source tools.
In our specific example, we'll build NutriChat, a RAG workflow that allows a person to query a 1,200-page PDF version of a nutrition textbook and have an LLM generate responses to the query based on passages of text from the textbook.
PDF source: https://pressbooks.oer.hawaii.edu/humannutrition2/
You can also run the notebook 00-simple-local-rag.ipynb directly in Google Colab.
TODO:
Two main options:
1. Run the notebook 00-simple-local-rag.ipynb directly in Google Colab (see above).
2. Clone the repo and set everything up locally (steps below).
Note: Tested in Python 3.11, running on Windows 11 with an NVIDIA RTX 4090 and CUDA 12.1.
git clone https://github.com/mrdbourke/simple-local-rag.git
cd simple-local-rag
python -m venv venv
Linux/macOS:
source venv/bin/activate
Windows:
.\venv\Scripts\activate
pip install -r requirements.txt
Note: I found I had to install torch manually with CUDA enabled (torch 2.1.1+ is required for newer versions of attention for faster inference), see: https://pytorch.org/get-started/locally/
On Windows I used:
pip3 install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
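To check that the CUDA-enabled install worked, a quick sanity check (not part of the repo's setup steps, just a toy snippet run inside the activated environment) is:

```python
import torch

print(torch.__version__)          # should be 2.1.1 or higher
print(torch.cuda.is_available())  # should print True if CUDA is set up correctly
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 4090"
```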
VS Code:
code .
Jupyter Notebook:
jupyter notebook
Setup notes:
To download the LLM from Hugging Face, you may need to authorize your machine with a Hugging Face token, for example via the huggingface_hub login() function. Once you've done this, you'll be able to download the models. If you're using Google Colab, you can add a Hugging Face token to the "Secrets" tab.
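For example, from Python (a minimal sketch using the huggingface_hub library; a token can be created at https://huggingface.co/settings/tokens):

```python
# Authorize this machine with a Hugging Face token so models can be downloaded.
from huggingface_hub import login

login()  # prompts for your Hugging Face token
```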
flash-attn is commented out in the requirements.txt due to compile time; feel free to uncomment it if you'd like to use it, or run pip install flash-attn.
RAG stands for Retrieval Augmented Generation.
It was introduced in the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.
Each step can be roughly broken down to:
Retrieval - Seek relevant information from a source given a query, for example, getting relevant passages of a textbook given a question.
Augmented - Use the relevant retrieved information to modify the input (prompt) to an LLM.
Generation - Generate an output given an input, for example, an LLM producing an answer based on the augmented prompt.
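To make these three steps concrete, here's a minimal toy sketch (not the notebook's code) assuming the sentence-transformers library, an example embedding model (all-mpnet-base-v2) and a made-up three-sentence "document store":

```python
from sentence_transformers import SentenceTransformer, util

# Toy "document store" standing in for chunks of the nutrition textbook.
chunks = [
    "Vitamin C is found in citrus fruits and helps maintain the immune system.",
    "Macronutrients include carbohydrates, proteins and fats.",
    "Fibre is a carbohydrate the body cannot fully digest.",
]

embedding_model = SentenceTransformer("all-mpnet-base-v2")
chunk_embeddings = embedding_model.encode(chunks, convert_to_tensor=True)

# 1. Retrieval: embed the query and find the most similar chunk.
query = "What foods contain vitamin C?"
query_embedding = embedding_model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, chunk_embeddings)[0]
top_chunk = chunks[int(scores.argmax())]

# 2. Augmentation: insert the retrieved text into the prompt.
prompt = (
    "Answer the query using the following context.\n"
    f"Context: {top_chunk}\n"
    f"Query: {query}\n"
    "Answer:"
)

# 3. Generation: pass the augmented prompt to an LLM to produce the answer.
print(prompt)
```

In the full pipeline, the augmented prompt is passed to a local LLM for generation rather than just printed.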
The main goal of RAG is to improve the generation outputs of LLMs.
Two primary improvements can be seen as:
1. Preventing hallucinations - LLMs are very good at generating good-sounding text, but that text isn't necessarily factual. RAG helps ground generations in relevant, retrieved passages.
2. Working with custom data - base LLMs are trained on internet-scale text, so they often lack specific knowledge (for example, private or domain-specific documents). RAG lets an LLM use that specific data to help with its generations.
The authors of the original RAG paper mentioned above outlined these two points in their discussion.
This work offers several positive societal benefits over previous work: the fact that it is more strongly grounded in real factual knowledge (in this case Wikipedia) makes it “hallucinate” less with generations that are more factual, and offers more control and interpretability. RAG could be employed in a wide variety of scenarios with direct benefit to society, for example by endowing it with a medical index and asking it open-domain questions on that topic, or by helping people be more effective at their jobs.
RAG can also be a much quicker solution to implement than fine-tuning an LLM on specific data.
RAG can help anywhere there is a specific set of information that an LLM may not have in its training data (e.g. anything not publicly accessible on the internet).
For example, you could use RAG for: question answering over internal company documentation, querying a domain-specific index such as a medical knowledge base, or (as in this project) "chat with PDF"-style features over a textbook.
All of these have the common theme of retrieving relevant resources and then presenting them in an understandable way using an LLM.
From this angle, you can consider an LLM a calculator for words.
Privacy, speed, cost.
Running locally means you use your own hardware.
From a privacy standpoint, this means you don't have to send potentially sensitive data to an API.
From a speed standpoint, it means you won't necessarily have to wait for an API queue or downtime; if your hardware is running, the pipeline can run.
And from a cost standpoint, running on your own hardware often has a higher upfront cost but little to no ongoing cost after that.
Performance-wise, LLM APIs may still perform better than an open-source model running locally on general tasks, but there are more and more examples of smaller, focused models outperforming larger models.
| Term | Description |
| --- | --- |
| Token | A sub-word piece of text. For example, "hello, world!" could be split into ["hello", ",", "world", "!"]. A token can be a whole word, part of a word or a group of punctuation characters. 1 token ~= 4 characters in English, 100 tokens ~= 75 words. Text gets broken into tokens before being passed to an LLM. |
| Embedding | A learned numerical representation of a piece of data. For example, a sentence of text could be represented by a vector with 768 values. Similar pieces of text (in meaning) will ideally have similar values. |
| Embedding model | A model designed to accept input data and output a numerical representation. For example, a text embedding model may take in 384 tokens of text and turn it into a vector of size 768. An embedding model can be and often is different from an LLM. |
| Similarity search/vector search | Similarity search/vector search aims to find vectors which are close together in high-dimensional space. For example, two pieces of similar text passed through an embedding model should have a high similarity score, whereas two pieces of text about different topics will have a lower similarity score. Common similarity measures are the dot product and cosine similarity. |
| Large Language Model (LLM) | A model which has been trained to numerically represent the patterns in text. A generative LLM will continue a sequence when given a sequence. For example, given a sequence of the text "hello, world!", a generative LLM may produce "we're going to build a RAG pipeline today!". This generation will be highly dependent on the training data and prompt. |
| LLM context window | The number of tokens an LLM can accept as input. For example, as of March 2024, GPT-4 has a default context window of 32k tokens (about 96 pages of text) but can go up to 128k if needed. A recent open-source LLM from Google, Gemma (March 2024), has a context window of 8,192 tokens (about 24 pages of text). A higher context window means an LLM can accept more relevant information to assist with a query. For example, in a RAG pipeline, if a model has a larger context window, it can accept more reference items from the retrieval system to aid with its generation. |
| Prompt | A common term for describing the input to a generative LLM. The idea of "prompt engineering" is to structure a text-based (or potentially image-based) input to a generative LLM in a specific way so that the generated output is ideal. This technique is possible because of an LLM's capacity for in-context learning, as in, it is able to use its representation of language to break down the prompt and recognize what a suitable output may be (note: the outputs of LLMs are probabilistic, so terms like "may output" are used). |
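As a small illustration of the embedding and similarity search terms above, here's a toy example comparing the dot product and cosine similarity with torch (made-up 3-dimensional vectors; real text embeddings typically have hundreds of values):

```python
import torch

# Two toy "embedding" vectors (real embeddings often have 768+ values).
a = torch.tensor([0.2, 0.9, 0.4])
b = torch.tensor([0.25, 0.8, 0.5])

print(torch.dot(a, b))                                     # dot product
print(torch.nn.functional.cosine_similarity(a, b, dim=0))  # cosine similarity (scale-invariant)
```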
Coming soon.