Warning: It is recommended that you have access to GPT-4 via the OpenAI API; GPT-3.5 will likely fail to build correct knowledge graphs from your data. Since we still don't have GPT-4 access through the OpenAI API (although we created our account a month ago and generated over $1 in billing a week ago), the `init_repo`, `update_file`, and `add_file` endpoints are still untested. We initialized knowledge graphs manually, through ChatGPT. Here be dragons.
You can install and set up BOR and Memgraph using Docker or by running it manually.
Before you start, make sure you have a running Docker instance and Docker compose installed.
Download BOR:

```
git clone https://github.com/memgraph/bor.git
cd bor
```
You will need to set the `OPENAI_API_KEY` environment variable in a `.env` file in the BOR root directory to your OpenAI API key. It should look like this:

```
OPENAI_API_KEY=YOUR_API_KEY
LLM_MODEL_NAME=gpt-4 # try with other models at your own risk
```

Where `YOUR_API_KEY` is a key you can get here.
Then build and start the services:

```
docker compose up
```
The installation process can take up to ten minutes. After successful installation, you can proceed to set up your frontend - ODIN or RUNE.
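Before wiring up a frontend, it can help to confirm the backend is actually reachable. A minimal sketch, assuming only that the FastAPI docs page mentioned below is served at http://localhost:8000/docs:

```python
import urllib.request
import urllib.error

def backend_ready(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if the BOR backend answers on its /docs endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/docs", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("backend up:", backend_ready("http://localhost:8000"))
```

If this prints `backend up: False`, give `docker compose up` a few more minutes to finish before retrying.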
For the manual installation, make sure you have conda and Python installed on your system.
Download BOR:

```
git clone https://github.com/memgraph/bor.git
cd bor
```
Create a new conda virtual environment using Python 3.9.16:

```
conda create --name bor_env python=3.9.16
```
Activate the environment:

```
source activate bor_env
```
To install all dependencies and set up all packages, run:

```
pip install -e .
```
This process might take a few minutes.
You will need a `.env` file with your OpenAI API key. It should look like this:

```
OPENAI_API_KEY=YOUR_API_KEY
MEMGRAPH_HOST="127.0.0.1"
MEMGRAPH_PORT=7687
CHROMA_DATA_DIR="/path/to/dir/chroma"
CHROMA_VECTOR_SPACE="cosine"
EMBEDDING_MODEL_NAME="text-embedding-ada-002"
LLM_MODEL_NAME="gpt-3.5-turbo-0613"
LLM_MODEL_TEMPERATURE=0.2
```

Where `YOUR_API_KEY` is the API key you can get here. You can replace `"/path/to/dir/chroma"` with your preferred path to an empty folder where BOR will store all embedding search data.
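Before starting the server, it can be handy to sanity-check the `.env` file. A stdlib-only sketch; the list of required keys is taken from the example above, and it checks only that keys are present, not that the values are valid:

```python
from pathlib import Path

# Keys from the example .env above.
REQUIRED_KEYS = {
    "OPENAI_API_KEY", "MEMGRAPH_HOST", "MEMGRAPH_PORT",
    "CHROMA_DATA_DIR", "CHROMA_VECTOR_SPACE",
    "EMBEDDING_MODEL_NAME", "LLM_MODEL_NAME", "LLM_MODEL_TEMPERATURE",
}

def parse_env(path: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def missing_keys(path: str) -> set:
    """Return the required keys that the .env file does not define."""
    return REQUIRED_KEYS - parse_env(path).keys()
```

If `missing_keys(".env")` returns an empty set, every key from the example is present.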
If you don't have Memgraph installed, you can run:

```
bash core/run_memgraph_290.sh
```
Start the FastAPI backend by running:

```
bash core/run_server.sh
```
Alternatively, you can run the server directly in your conda environment:

```
uvicorn core.restapi.api:app --reload
```
After successful initialization, you can proceed to set up your frontend - ODIN or RUNE.
When BOR is running, you can access the endpoint documentation at http://localhost:8000/docs#/
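As the warning at the top notes, the `init_repo`, `update_file`, and `add_file` endpoints are still untested, so treat calls to them as experimental. A hedged sketch of calling the backend from Python with only the standard library; the payload field name (`repo_path`) is an assumption, so check the request schema at http://localhost:8000/docs#/ first:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_request(endpoint: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request for a BOR endpoint."""
    return urllib.request.Request(
        f"{BASE_URL}/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # `repo_path` is a guess at the payload shape -- verify it in the docs first.
    req = build_request("init_repo", {"repo_path": "/path/to/your/repo"})
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
```

Separating request construction from sending makes it easy to swap in the real schema once you have confirmed it in the endpoint documentation.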