Is your feature request related to a problem? Please describe.
As of now, we don't have an example of conversing with an LLM powered by GraphRAG. I see the code in the notebook in the documentation. I've looked up the source code, and indeed we can pass a history when constructing a context_builder, but we don't have a client yet.
Describe the solution you'd like
Implement a client that, given a processed knowledge graph, can run conversations with multiple turns.
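To make the request concrete, here is a minimal sketch of what such a multi-turn client could look like. Everything here is illustrative: `search_fn` stands in for whichever GraphRAG search entry point (local or global) the real client would wrap, and the history format is an assumption, not the library's API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ChatClient:
    # search_fn is a placeholder for a GraphRAG search call:
    # it takes (query, history) and returns an answer string.
    search_fn: Callable
    history: list = field(default_factory=list)

    def ask(self, query: str) -> str:
        answer = self.search_fn(query, self.history)
        # Record the turn so the context_builder can see prior exchanges.
        self.history.append({"role": "user", "content": query})
        self.history.append({"role": "assistant", "content": answer})
        return answer

# Usage with a dummy search function standing in for GraphRAG:
client = ChatClient(search_fn=lambda q, h: f"answer to {q} ({len(h)} prior messages)")
client.ask("What themes appear?")   # sees 0 prior messages
client.ask("Tell me more.")         # sees 2 prior messages
```

The point is only that the client owns the history and threads it into each search call; the actual search invocation is the part that needs the maintainers' input.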
Additional context
I'd like to help implement this, but I have a few questions:
How can we determine when to use global search and when to use local search? It seems we also need an LLM to decide, based on the conversation history, whether a question/query is global or local.
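One option is to have the LLM itself classify each query before routing. As a sketch of the routing shape only, here is a stub classifier; the keyword heuristic is a stand-in for the LLM call and the marker words are illustrative assumptions, not anything from GraphRAG.

```python
def classify_query(query: str, history: list) -> str:
    """Decide between 'global' and 'local' search.

    Stand-in heuristic: in a real client this would be an LLM call that
    also sees the conversation history. The keywords are illustrative.
    """
    global_markers = ("overall", "themes", "summary", "across", "main topics")
    # Include the last couple of turns so follow-ups inherit context.
    text = " ".join(list(history[-2:]) + [query]).lower()
    return "global" if any(m in text for m in global_markers) else "local"

classify_query("What are the main topics?", [])     # -> "global"
classify_query("What did Scrooge say to Bob?", [])  # -> "local"
```

Whatever replaces the heuristic, the key design question stands: the router needs history, not just the current query, so that a follow-up like "tell me more" keeps the previous routing.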
How can I find the content of references? References like [Data: Reports (377, 327, 182)] are inserted into the assistant's answer, but these are indices. How can I use these indices to find the referenced content?
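Mechanically, resolving those markers seems to reduce to parsing out the ids and looking them up in whatever table holds the community reports. A sketch, assuming the reports are available as an id-to-text mapping (how to load that mapping from the index output is exactly what I'm asking about):

```python
import re

# Assumed shape: report id -> report text, loaded from the indexing output.
reports = {377: "Report about X...", 327: "Report about Y...", 182: "Report about Z..."}

def resolve_references(answer: str, reports: dict) -> dict:
    """Extract report ids from markers like '[Data: Reports (377, 327, 182)]'
    and return the matching report contents."""
    ids = []
    for group in re.findall(r"\[Data: Reports \(([^)]+)\)\]", answer):
        ids.extend(int(tok) for tok in group.split(",") if tok.strip().isdigit())
    return {i: reports[i] for i in ids if i in reports}

resolve_references("... as shown [Data: Reports (377, 327, 182)].", reports)
```

The parsing is trivial; the open question is which output artifact those indices point into and whether the ids are stable across search modes.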
This issue has been marked stale due to inactivity after repo maintainer or community member responses that request more information or suggest a solution. It will be closed after five additional days.