Integrate Microsoft's GraphRAG Technology into Open WebUI for Advanced Information Retrieval
GraphRAG4OpenWebUI is an API interface designed specifically for Open WebUI that integrates Microsoft Research's GraphRAG (Graph-based Retrieval-Augmented Generation) technology. The project provides a powerful information retrieval system that supports multiple search models and is particularly well suited for use in open web user interfaces.
The main goal of this project is to provide a convenient interface for Open WebUI to leverage the powerful features of GraphRAG. It integrates three main retrieval methods and offers a comprehensive search option, allowing users to obtain thorough and precise search results.
- Local Search
- Global Search
- Tavily Search
- Full Model Search
GraphRAG4OpenWebUI now supports local large language models (LLMs) and embedding models, increasing the project's flexibility and privacy. Specifically, the following local options are supported:
- Ollama: set the `API_BASE` environment variable to point to Ollama's API endpoint
- LM Studio: configured via the `API_BASE` environment variable
- Local embedding models: configured via the `GRAPHRAG_EMBEDDING_MODEL` environment variable

This support for local models allows GraphRAG4OpenWebUI to run without relying on external APIs, enhancing data privacy and reducing usage costs.
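As a sketch of a fully local setup, the environment might look like the following. This assumes Ollama is serving its OpenAI-compatible API on the default port 11434; the model names (`gemma2`, `nomic-embed-text`) are examples and must match models you have actually pulled into Ollama:

```shell
# Point the LLM at a local Ollama instance (OpenAI-compatible endpoint)
export API_BASE="http://localhost:11434/v1"
export GRAPHRAG_API_KEY="ollama"        # placeholder value; Ollama does not check the key
export GRAPHRAG_LLM_MODEL="gemma2"      # example: any chat model pulled into Ollama

# Use a local embedding model as well
export API_BASE_EMBEDDING="http://localhost:11434/v1"
export GRAPHRAG_EMBEDDING_MODEL="nomic-embed-text"   # example embedding model
```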
Ensure that you have Python 3.8 or higher installed on your system. Then, follow these steps to install:
Clone the repository:
git clone https://github.com/your-username/GraphRAG4OpenWebUI.git
cd GraphRAG4OpenWebUI
Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows use venv\Scripts\activate
Install dependencies:
pip install -r requirements.txt
Note: The graphrag package might need to be installed from a specific source. If the above command fails to install graphrag, please refer to Microsoft Research's specific instructions or contact the maintainer for the correct installation method.
Before running the API, you need to set the following environment variables. You can do this by creating a .env
file or exporting them directly in your terminal:
# Set the TAVILY API key
export TAVILY_API_KEY="your_tavily_api_key"
# Set the input directory
export INPUT_DIR="/path/to/your/input/directory"
# Set the API key for LLM
export GRAPHRAG_API_KEY="your_actual_api_key_here"
# Set the API key for embedding (if different from GRAPHRAG_API_KEY)
export GRAPHRAG_API_KEY_EMBEDDING="your_embedding_api_key_here"
# Set the LLM model
export GRAPHRAG_LLM_MODEL="gemma2"
# Set the API base URL
export API_BASE="http://localhost:11434/v1"
# Set the embedding API base URL (default is OpenAI's API)
export API_BASE_EMBEDDING="https://api.openai.com/v1"
# Set the embedding model (default is "text-embedding-3-small")
export GRAPHRAG_EMBEDDING_MODEL="text-embedding-3-small"
Make sure to replace the placeholders in the above commands with your actual API keys and paths.
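A missing variable typically only surfaces as a runtime failure, so a pre-flight check can save debugging time. The helper below is illustrative and not part of the project; the list of required variables is an assumption based on the configuration above:

```python
import os

# Assumed set of required variables, based on the configuration section above
REQUIRED_VARS = [
    "TAVILY_API_KEY",
    "INPUT_DIR",
    "GRAPHRAG_API_KEY",
    "GRAPHRAG_LLM_MODEL",
    "API_BASE",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```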
Start the server:
python main-en.py
The server will run on http://localhost:8012.
API Endpoints:
- `/v1/chat/completions`: POST request for performing searches
- `/v1/models`: GET request to retrieve the list of available models

Integration with Open WebUI:
In the Open WebUI configuration, set the API endpoint to http://localhost:8012/v1/chat/completions. This allows Open WebUI to use the search functionality of GraphRAG4OpenWebUI.
Example search request:
import requests

url = "http://localhost:8012/v1/chat/completions"
data = {
    "model": "full-model:latest",
    "messages": [{"role": "user", "content": "Your search query"}],
    "temperature": 0.7,
}

# json= serializes the body and sets the Content-Type header automatically
response = requests.post(url, json=data)
print(response.json())
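Each search method is selected via the `model` field of the request body. The helper below is an illustrative sketch (not part of the project) that builds a request body for a given method, using the model identifiers the API exposes:

```python
# Model identifiers exposed by the API, one per search method
SEARCH_MODELS = {
    "local": "graphrag-local-search:latest",
    "global": "graphrag-global-search:latest",
    "tavily": "tavily-search:latest",
    "full": "full-model:latest",
}

def build_search_payload(method, query, temperature=0.7):
    """Build a chat-completions request body for the given search method."""
    if method not in SEARCH_MODELS:
        raise ValueError(f"unknown search method: {method!r}")
    return {
        "model": SEARCH_MODELS[method],
        "messages": [{"role": "user", "content": query}],
        "temperature": temperature,
    }
```

For example, `build_search_payload("global", "Your search query")` produces a body that can be POSTed to `/v1/chat/completions` exactly as in the request example above.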
Available models:
- `graphrag-local-search:latest`: Local search
- `graphrag-global-search:latest`: Global search
- `tavily-search:latest`: Tavily search
- `full-model:latest`: Comprehensive search (includes all search methods above)

Make sure the GraphRAG input files are available in the INPUT_DIR directory.

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.