KeremKurban opened this issue 2 months ago (status: Open)
I got the same answer. I did the following analysis: when map_system_prompt extracts a data table with 5 records and I send the contents of map_system_prompt to the LLM, I do not get the reply in the JSON format that map_system_prompt expects. However, when map_system_prompt extracts a data table with 4 records, the reply does come back in the JSON format set in map_system_prompt. I don't understand what causes this.
Also, I would like to ask: where do the contents of the data table extracted by map_system_prompt come from?
Perhaps the content is too sensitive.
Same problem here: "I am sorry but I am unable to answer this question given the provided data."
Warning: All map responses have score 0 (i.e., no relevant information found from the dataset), returning a canned 'I do not know' answer. You can try enabling allow_general_knowledge to encourage the LLM to incorporate relevant general knowledge, at the risk of increasing hallucinations.
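For context, this warning comes from global search's map/reduce design: the map step asks the LLM to score each extracted key point, and if every point comes back with score 0, the reduce step short-circuits to the canned answer. A minimal sketch of that fallback behavior, where the function and field names are my own illustrative assumptions rather than graphrag's actual implementation:

```python
# Sketch of the map/reduce fallback that yields the canned answer.
# Structure and names here are illustrative assumptions, not the
# real graphrag code.

NO_DATA_ANSWER = (
    "I am sorry but I am unable to answer this question "
    "given the provided data."
)

def reduce_map_responses(map_responses):
    """Keep only key points the map step scored above 0; if none
    survive, return the canned 'no data' answer."""
    scored_points = [
        point
        for response in map_responses
        for point in response.get("points", [])
        if point.get("score", 0) > 0
    ]
    if not scored_points:
        return NO_DATA_ANSWER
    # In the real pipeline the surviving points go to the LLM for a
    # final summarization; here we just join them by score.
    scored_points.sort(key=lambda p: p["score"], reverse=True)
    return " ".join(p["description"] for p in scored_points)

# Example: a single map response where every point scored 0 triggers
# the canned answer, matching the warning above.
maps = [{"points": [{"description": "CA1 place cells", "score": 0}]}]
print(reduce_map_responses(maps))
```

This is only meant to show why "all map responses have score 0" leads straight to the canned reply regardless of what the map step actually extracted.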
Do you need to file an issue?
Describe the bug
I gave it a scientific article to extract entities and keywords from, but global search is very sensitive to the question I ask. As stated in the GraphRAG paper, GraphRAG should perform well on global summarization tasks, so I tried to get keywords out of the document with the query shown under Steps to reproduce below.
I also added a print statement after this line, and I can see that the
search_response
variable has the correct answer. However, further along in the code there seems to be a problem parsing this answer, and I get the canned response and warning shown above instead.
At other times I was also getting a JSON decode error, depending on the question. So I cannot yet trust this tool for larger datasets.
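One workaround for the intermittent JSON decode errors is to parse the LLM reply defensively before handing it to the reduce step: strip any markdown code fence the model wrapped around the JSON, and fall back to an empty point list instead of raising when the payload is not valid JSON. A sketch under those assumptions (the helper name is mine, not part of graphrag):

```python
import json
import re

def parse_map_response(raw: str) -> dict:
    """Best-effort parse of an LLM map response.

    Strips a surrounding ```json fence if present and falls back to
    an empty point list when the text is not valid JSON, instead of
    raising json.JSONDecodeError.
    """
    text = raw.strip()
    # Remove a surrounding markdown code fence, e.g. ```json ... ```
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        # e.g. the model replied in prose ("I am sorry but...")
        return {"points": []}
    return parsed if isinstance(parsed, dict) else {"points": []}

# A fenced reply parses cleanly; a prose refusal degrades gracefully.
print(parse_map_response('```json\n{"points": [{"description": "x", "score": 5}]}\n```'))
print(parse_map_response("I am sorry but I am unable to answer."))
```

This does not fix the underlying prompt sensitivity (5 records vs. 4 records changing the output format), but it would turn a hard JSONDecodeError into the same score-0 path as an empty map response.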
Steps to reproduce
ca1_paper_grobid.txt
Use the attached document to run the toolbox and ask questions as I did:
python -m graphrag.query --root ./ragtest/ --method global "Find 5 keywords to describe this document in the order of importance, NO explanations of the keyword."
Expected Behavior
GraphRAG Config Used
Logs and screenshots
Additional Information