This application integrates GraphRAG with AutoGen agents, powered by local LLMs from Ollama, for free, fully offline embedding and inference.
Follow these steps to set up and run AutoGen GraphRAG Local with Ollama and Chainlit UI on Linux:
Install Ollama and pull the models:
Visit Ollama's website for installation files.
ollama pull mistral
ollama pull nomic-embed-text
ollama pull llama3
ollama serve
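Before moving on, you can confirm from a second terminal that the models were pulled and that the Ollama server is reachable; Ollama listens on port 11434 by default:
ollama list
curl http://localhost:11434/api/tags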
Create conda environment and install packages:
conda create -n RAG_agents python=3.12
conda activate RAG_agents
git clone https://github.com/karthik-codex/autogen_graphRAG.git
cd autogen_graphRAG
pip install -r requirements.txt
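As a quick sanity check that the key dependencies resolved (the package names below are what this walkthrough assumes requirements.txt installs):
pip show graphrag chainlit litellm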
Initialize GraphRAG root folder:
mkdir -p ./input
python -m graphrag.index --init --root .
mv ./utils/settings.yaml ./
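A typical graphrag init leaves you with input/, settings.yaml, a .env file, and a prompts/ folder. By default GraphRAG indexes the text files it finds under ./input, so copy at least one document in before building the graph; the path below is only a placeholder:
cp /path/to/your_document.txt ./input/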
Replace 'embedding.py' and 'openai_embeddings_llm.py' in the installed GraphRAG package with the versions from the utils folder. Locate the installed copies first, then overwrite them (see the example copy commands below):
sudo find / -name openai_embeddings_llm.py
sudo find / -name embedding.py
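Once find reports where the package is installed, overwrite both files with the patched versions from ./utils. The conda path below is only an example and will differ on your machine:
cp ./utils/openai_embeddings_llm.py ~/miniconda3/envs/RAG_agents/lib/python3.12/site-packages/graphrag/llm/openai/openai_embeddings_llm.py
cp ./utils/embedding.py ~/miniconda3/envs/RAG_agents/lib/python3.12/site-packages/graphrag/query/llm/oai/embedding.py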
Create embeddings and knowledge graph:
python -m graphrag.index --root .
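Indexing can take a while with local models. When it finishes, the artifacts land under ./output, and you can sanity-check the graph from the CLI before wiring up the agents (the exact query syntax may differ slightly between graphrag versions; the question is arbitrary):
ls ./output
python -m graphrag.query --root . --method global "What are the main topics in the indexed documents?"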
Start LiteLLM proxy server:
litellm --model ollama_chat/llama3
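LiteLLM exposes an OpenAI-compatible endpoint, on port 4000 unless you change it; a quick curl from another terminal confirms the proxy can reach Ollama:
curl http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama_chat/llama3", "messages": [{"role": "user", "content": "Say hello"}]}'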
Run app:
chainlit run appUI.py
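Chainlit serves the UI on http://localhost:8000 by default; -w enables auto-reload while you edit appUI.py, and --port moves it if 8000 is taken:
chainlit run appUI.py -w --port 8001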
Follow these steps to set up and run AutoGen GraphRAG Local with Ollama and Chainlit UI on Windows:
Install Ollama and pull the models:
Visit Ollama's website for installation files.
ollama pull mistral
ollama pull nomic-embed-text
ollama pull llama3
ollama serve
Create Python virtual environment and install packages:
git clone https://github.com/karthik-codex/autogen_graphRAG.git
cd autogen_graphRAG
python -m venv venv
./venv/Scripts/activate
pip install -r requirements.txt
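If the activation command above does not work in your shell, use the matching script instead: Activate.ps1 for PowerShell or activate.bat for cmd.exe (both are standard files created by venv):
.\venv\Scripts\Activate.ps1
venv\Scripts\activate.bat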
Initialize GraphRAG root folder:
mkdir input
python -m graphrag.index --init --root .
cp ./utils/settings.yaml ./
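As in the Linux walkthrough, GraphRAG only indexes what it finds under .\input, so copy at least one plain-text document in before building the graph; the path below is a placeholder:
cp C:\path\to\your_document.txt .\input\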
Replace 'embedding.py' and 'openai_embeddings_llm.py' in the installed GraphRAG package with the versions from the utils folder:
cp ./utils/openai_embeddings_llm.py .\venv\Lib\site-packages\graphrag\llm\openai\openai_embeddings_llm.py
cp ./utils/embedding.py .\venv\Lib\site-packages\graphrag\query\llm\oai\embedding.py
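To confirm that graphrag will import the patched copy rather than another installation, print the module path; it should point inside .\venv:
python -c "import graphrag.llm.openai.openai_embeddings_llm as m; print(m.__file__)"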
Create embeddings and knowledge graph:
python -m graphrag.index --root .
Start LiteLLM proxy server:
litellm --model ollama_chat/llama3
Run app:
chainlit run appUI.py