gusye1234 / nano-graphrag

A simple, easy-to-hack GraphRAG implementation
MIT License

Add azure openai as an option in _llm.py #31

Closed SliverBulle closed 1 month ago

SliverBulle commented 1 month ago

Description

Thanks a lot for sharing this lightweight, easy-to-hack implementation of GraphRAG with support for incremental insert.

If you have an AZURE_OPENAI_API_KEY instead of an OPENAI_API_KEY, like me, you can use this PR to handle it. Mimicking the author's existing helpers, I wrote asynchronous calls to Azure OpenAI. This pull request introduces an Azure OpenAI option for both the LLM and the embedding model, so users who have an Azure OpenAI key instead of an OpenAI key can still use the library.
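For context, a minimal sketch of what such an asynchronous Azure OpenAI completion helper can look like with the official openai package is shown below; the function name and the "gpt-4o" deployment name are illustrative assumptions, not the exact code in this PR.

import os
from openai import AsyncAzureOpenAI

async def azure_gpt_4o_complete(prompt, system_prompt=None, history_messages=[], **kwargs):
    # Build the client from the Azure environment variables described below.
    client = AsyncAzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version=os.environ["OPENAI_API_VERSION"],
    )
    # Assemble the chat history: optional system prompt, prior turns, then the new prompt.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history_messages)
    messages.append({"role": "user", "content": prompt})
    # "gpt-4o" must match the name of your Azure deployment (assumption for this sketch).
    response = await client.chat.completions.create(
        model="gpt-4o", messages=messages, **kwargs
    )
    return response.choices[0].message.content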

Checklist

How to use

The demo is as follows; I use load_dotenv to load the environment variables.

from nano_graphrag import GraphRAG, QueryParam
from dotenv import load_dotenv

load_dotenv()

graph_func = GraphRAG(working_dir="./dickens")

with open("./book.txt") as f:
    graph_func.insert(f.read())

# Perform global graphrag search
print(graph_func.query("What are the top themes in this story?"))

# Perform local graphrag search (I think this is the better and more scalable one)
print(graph_func.query("What are the top themes in this story?", param=QueryParam(mode="local")))

You can add a .env file, like in your demo.py:

API_KEY_EMB = "<your azure openai key for embedding>"
AZURE_ENDPOINT_EMB = "<your azure openai endpoint for embedding>"
API_VERSION_EMB="<api version>"

AZURE_OPENAI_API_KEY="<your azure openai key>"
AZURE_OPENAI_ENDPOINT="<AZURE_OPENAI_ENDPOINT>"
OPENAI_API_VERSION="<OPENAI_API_VERSION>"

or run export API_KEY_EMB="<your azure openai key for embedding>" (and likewise for the other variables) in your terminal.
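As a rough illustration of how the *_EMB variables above could be consumed, here is a hedged sketch of an asynchronous Azure OpenAI embedding helper; the function name and the "text-embedding-3-small" deployment name are assumptions, not the exact code in this PR.

import os
import numpy as np
from openai import AsyncAzureOpenAI

async def azure_openai_embedding(texts: list[str]) -> np.ndarray:
    # A separate client for the embedding deployment, configured from the *_EMB variables.
    client = AsyncAzureOpenAI(
        api_key=os.environ["API_KEY_EMB"],
        azure_endpoint=os.environ["AZURE_ENDPOINT_EMB"],
        api_version=os.environ["API_VERSION_EMB"],
    )
    # "text-embedding-3-small" must match your Azure embedding deployment name (assumption).
    response = await client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])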

gusye1234 commented 1 month ago

LGTM! Is your PR finished? If so, I will fix some small things and merge it.

SliverBulle commented 1 month ago

> LGTM! Is your PR finished? If so, I will fix some small things and merge it.

I think it's finished, thanks for your time.