microsoft / graphrag

A modular graph-based Retrieval-Augmented Generation (RAG) system
https://microsoft.github.io/graphrag/
MIT License

[Bug]: search_response in search.py has the correct information, but the LLM responds 'I am sorry but I am unable to answer this question given the provided data' #942


KeremKurban commented 3 months ago


Describe the bug

I gave the system a scientific article to extract entities and keywords from, but global search is very sensitive to the questions I ask. As stated in the GraphRAG paper, GraphRAG should perform well on global summarization tasks, so I tried to get keywords out of the document with the query:

"Find 5 keywords to describe this document in the order of importance, NO explanations of the keyword."

I also added a print statement after this line, and I can see that the search_response variable has the correct answer:

Search response: { "keywords": [ "Hippocampus", "Theta Oscillations", "Memory Processing", "CA1 Region", "Neurogenesis" ] }

However, further along in the code I think there is a problem parsing this answer, and I get:

SUCCESS: Global Search Response: I am sorry but I am unable to answer this question given the provided data.

Other times I was getting a JSON decode error as well, depending on the question, so I cannot yet trust this tool for larger datasets.
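For reference, this failure mode is easy to reproduce outside GraphRAG: json.loads rejects any reply that is not bare JSON, for example one wrapped in markdown fences or preceded by prose. A minimal sketch in plain Python (the example replies are invented, not taken from GraphRAG):

```python
import json

# Hypothetical LLM replies: only the first is bare JSON.
replies = [
    '{"keywords": ["Hippocampus", "Theta Oscillations"]}',
    '```json\n{"keywords": ["Hippocampus", "Theta Oscillations"]}\n```',
    'Here are the keywords: {"keywords": ["Hippocampus"]}',
]

for raw in replies:
    try:
        print("parsed:", json.loads(raw))
    except json.JSONDecodeError as err:
        # Same error as in this issue: "Expecting value: line 1 column 1 (char 0)"
        print("failed:", err)
```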

Steps to reproduce

ca1_paper_grobid.txt

Use the attached document to run the toolbox and ask questions as I did:

python -m graphrag.query --root ./ragtest/ --method global "Find 5 keywords to describe this document in the order of importance, NO explanations of the keyword."

Expected Behavior

    "Hippocampus",
    "Theta Oscillations",
    "Memory Processing",
    "CA1 Region",
    "Neurogenesis"

GraphRAG Config Used


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: gpt-4o-mini
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

Logs and screenshots

INFO: Reading settings from ragtest/settings.yaml
creating llm client with {'api_key': 'REDACTED,len=56', 'type': "openai_chat", 'model': 'gpt-4o-mini', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 180.0, 'api_base': None, 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
Search response: { "keywords": [ "Hippocampus", "Theta Oscillations", "Memory Processing", "CA1 Region", "Neurogenesis" ] }
Warning: All map responses have score 0 (i.e., no relevant information found from the dataset), returning a canned 'I do not know' answer. You can try enabling allow_general_knowledge to encourage the LLM to incorporate relevant general knowledge, at the risk of increasing hallucinations.

SUCCESS: Global Search Response: I am sorry but I am unable to answer this question given the provided data.

Additional Information

yangxue-1 commented 3 months ago

I got the same answer.

I did some analysis: when the data table extracted for map_system_prompt contains 5 records, sending the map_system_prompt content to the LLM does not produce the JSON-format reply that map_system_prompt expects. However, when the extracted data table contains 4 records, the LLM does reply in the JSON format specified in map_system_prompt. I don't understand what causes this.

Also, I would like to ask: where do the contents of the data table used by map_system_prompt come from?
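For context, my understanding is that the map-step data tables are batches of community reports produced during indexing, and each map call is expected to return scored points as JSON; a reply that cannot be parsed is treated as contributing no points. A rough sketch of that contract in plain Python (not the actual graphrag code; the replies are invented):

```python
import json

def parse_map_response(raw: str) -> list[dict]:
    """Return the scored points from one map reply, or [] if it is not valid JSON."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []  # prose or fenced reply -> treated as "no relevant points"
    return data.get("points", []) if isinstance(data, dict) else []

# Invented example replies: one valid, one prose (like the logs further down).
map_replies = [
    '{"points": [{"description": "Theta oscillations support memory.", "score": 85}]}',
    "To determine the top themes in a story, I would need to know ...",
]

points = [p for reply in map_replies for p in parse_map_response(reply)]
if not any(p.get("score", 0) > 0 for p in points):
    # All map responses scored 0 -> the canned answer seen in this issue.
    print("I am sorry but I am unable to answer this question given the provided data.")
else:
    print("reduce step would combine:", points)
```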

guangxiangdebizi commented 3 months ago

Perhaps the content is too sensitive.

redpintings commented 1 month ago

The same problem: I am sorry but I am unable to answer this question given the provided data

Warning: All map responses have score 0 (i.e., no relevant information found from the dataset), returning a canned 'I do not know' answer. You can try enabling allow_general_knowledge to encourage the LLM to incorporate relevant general knowledge, at the risk of increasing hallucinations.

WangAo-0 commented 1 week ago
❯ graphrag query \
--root ./ragtest \
--method global \
--query "What are the top themes in this story?"

creating llm client with {'api_key': 'REDACTED,len=6', 'type': "openai_chat", 'model': 'myqwen2.5', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 1800.0, 'api_base': 'http://2.ndsl:11434/v1', 'api_version': None, 'organization': None, 'proxy': None, 'audience': None, 'deployment_name': None, 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
----------------------------------------------------
To determine the top themes in a story, I would need to know the specific story you're referring to. Could you please provide more details about the story, such as its title, author, or a summary of key events and characters? This information will help me identify the main themes accurately.
----------------------------------------------------
not expected dict type. type=<class 'str'>:
Traceback (most recent call last):
  File "/home/oliver/graphrag/graphrag/llm/openai/utils.py", line 133, in try_parse_json_object
    result = json.loads(input)
             ^^^^^^^^^^^^^^^^^
  File "/home/oliver/.conda/envs/graphrag/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/oliver/.conda/envs/graphrag/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/oliver/.conda/envs/graphrag/lib/python3.12/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
----------------------------------------------------
[the same prose reply and JSONDecodeError traceback repeat for the three remaining map calls]
Warning: All map responses have score 0 (i.e., no relevant information found from the dataset), returning a canned 'I do not know' answer. You can try enabling `allow_general_knowledge` to encourage the LLM to incorporate relevant general knowledge, at the risk of increasing hallucinations.

SUCCESS: Global Search Response:
I am sorry but I am unable to answer this question given the provided data.

Same problem here.
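The traceback above shows try_parse_json_object in graphrag/llm/openai/utils.py handing the raw reply straight to json.loads, so a prose or markdown-fenced reply fails with "Expecting value: line 1 column 1 (char 0)" and that map response ends up scored 0. A sketch of a more tolerant parse (a possible workaround under that reading, not the shipped implementation):

```python
import json
import re

def tolerant_parse_json(text: str):
    """Best-effort JSON extraction from an LLM reply.

    Sketch of a possible workaround, not graphrag's try_parse_json_object:
    strip markdown code fences, then fall back to the first {...} span.
    Returns None when nothing parseable is found, so the caller can
    decide how to score the response instead of raising.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|```\s*$", "", text.strip())
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        pass
    match = re.search(r"\{.*\}", cleaned, flags=re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return None

# Invented examples: fenced JSON parses; pure prose returns None instead of raising.
print(tolerant_parse_json('```json\n{"points": []}\n```'))    # -> {'points': []}
print(tolerant_parse_json("To determine the top themes ..."))  # -> None
```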