Defozo opened this issue 1 day ago
During parsing, an LLM is used to extract entities, relations, and communities; each entity/relation/community is then stored in ES (Elasticsearch). During inference, keywords are used to retrieve the related entities/relations/communities, and the LLM then extracts the answer from that context.
The code is in folder graphrag/.
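A minimal sketch of that two-phase flow, with an in-memory store standing in for the Elasticsearch index. All names here (`GraphStore`, `parse_document`, `infer`) and the hard-coded "extracted" triples are illustrative assumptions, not RAGFlow's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class GraphStore:
    """Stand-in for the Elasticsearch index holding graph elements."""
    docs: list = field(default_factory=list)

    def index(self, kind: str, name: str, text: str) -> None:
        self.docs.append({"kind": kind, "name": name, "text": text})

    def search(self, keywords: list) -> list:
        # Keyword match against stored entities/relations/communities.
        return [
            d for d in self.docs
            if any(k.lower() in (d["name"] + " " + d["text"]).lower()
                   for k in keywords)
        ]

def parse_document(store: GraphStore, chunk: str) -> None:
    # Parsing phase: in the real pipeline an LLM call would extract
    # entities, relations, and community summaries from `chunk`;
    # hard-coded here purely for illustration.
    extracted = [
        ("entity", "Elasticsearch", "search engine used as the graph store"),
        ("relation", "GraphRAG->Elasticsearch", "graph elements are stored in ES"),
        ("community", "retrieval stack", "summary of retrieval-related components"),
    ]
    for kind, name, text in extracted:
        store.index(kind, name, text)

def infer(store: GraphStore, question_keywords: list) -> list:
    # Inference phase: keyword retrieval first; the hits would then be
    # handed to an LLM as context for answer extraction.
    return store.search(question_keywords)

store = GraphStore()
parse_document(store, "...source chunk...")
context = infer(store, ["elasticsearch"])
print(len(context))  # prints 2: the entity and the relation match
```

The key design point this illustrates: retrieval at query time is keyword-based against the indexed graph elements, so answer quality depends on both the extraction step at parse time and the keywords derived from the question.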
Does it currently work with any agent workflow, e.g. with a "General-purpose chatbot"?
Describe your problem
What is being done during file parsing, and where is the knowledge graph stored? What is being done during inference?