Closed xxWeiDG closed 4 months ago
got the same issue
same issue
Found out that you should put this line in the settings.yaml:
model_supports_json: false # recommended if this is available for your model.
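For anyone unsure where that line goes: `model_supports_json` sits under the `llm` block of settings.yaml (layout per GraphRAG's default config; the model name below is just an illustration, keep whatever you already have):

```yaml
llm:
  type: openai_chat
  model: glm-4  # illustrative; use your own model name
  model_supports_json: false # recommended if this is available for your model.
```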
> Found out that you should put this line in the settings.yaml:
> model_supports_json: false # recommended if this is available for your model.

Still got the same issue even after deleting the cache folder.
> Found out that you should put this line in the settings.yaml:
> model_supports_json: false # recommended if this is available for your model.

It works for me. LLM is GLM-4, embedding is bce-embedding-base_v1.
I got the same issue, and after changing to a stronger LLM, in my case Qwen2-72B-Instruct, the issue disappeared (unfortunately GLM-4 didn't work, but I guess it might succeed if executed multiple times).
Checking the log, we can see the reason: the LLM returns markdown-formatted JSON (starting with ```json) instead of a pure JSON string. I added a log line, and here is what a successful input looks like:
22:14:22,462 graphrag.llm.openai.utils INFO ####input: {
"title": "Scrooge and Marley's Business Community",
"summary": "The community is centered around Scrooge, a prominent figure in Charles Dickens' 'A Christmas Carol,' who is characterized by his cold, unsympathetic nature and tight-fisted business practices. Scrooge is associated with the business Scrooge and Marley, where he was a partner with Marley, and is linked to various entities such as beggars, blind men's dogs, and children, all of whom avoid him, indicating his negative reputation. The community also includes Marley's funeral, Christmas Eve, and the city where Scrooge's counting-house is located, all of which play significant roles in the narrative.",
"rating": 7.0,
...
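As a workaround, the fenced output can be unwrapped before parsing. The helper below is a hypothetical sketch of that fence-stripping idea, not GraphRAG's actual `try_parse_json_object`:

```python
import json
import re


def parse_llm_json(raw: str):
    """Parse LLM output that may be wrapped in a ```json ... ``` markdown fence."""
    text = raw.strip()
    # Strip a leading ```json (or bare ```) fence and the trailing ``` if present.
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```

With this, both a bare JSON string and a fenced one parse to the same object, so models that insist on markdown fences no longer crash the pipeline.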
Just pulled the latest version (a0caadb320c5db4e7b8e83625f00c19be893170b) and am still seeing the KeyError: 'community' error during the create_final_community_reports phase. Using ollama with gemma2:27b-instruct-q4_0. This error occurs prior to making any embeddings requests. model_supports_json is set to false.
> Found out that you should put this line in the settings.yaml: model_supports_json: false # recommended if this is available for your model.
>
> It works for me. LLM is GLM-4, embedding is bce-embedding-base_v1.

Can you share how you got it working? I used GLM-4 but it failed.
Same issue with qwen2-1.5b-instruct-q6_K (ollama) + model_supports_json set to false.
Describe the issue
Steps to reproduce
No response
GraphRAG Config Used
No response
Logs and screenshots
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/utils.py", line 93, in try_parse_json_object
    result = json.loads(input)
  File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)
03:00:03,496 graphrag.index.graph.extractors.community_reports.community_reports_extractor ERROR error generating community report
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/graphrag/index/graph/extractors/community_reports/community_reports_extractor.py", line 58, in __call__
    await self._llm(
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/json_parsing_llm.py", line 34, in __call__
    result = await self._delegate(input, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/openai_token_replacing_llm.py", line 37, in __call__
    return await self._delegate(input, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/openai_history_tracking_llm.py", line 33, in __call__
    output = await self._delegate(input, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/base/caching_llm.py", line 104, in __call__
    result = await self._delegate(input, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/base/rate_limiting_llm.py", line 177, in __call__
    result, start = await execute_with_retry()
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/base/rate_limiting_llm.py", line 159, in execute_with_retry
    async for attempt in retryer:
  File "/usr/local/lib/python3.11/site-packages/tenacity/asyncio/__init__.py", line 166, in __anext__
    do = await self.iter(retry_state=self._retry_state)
  File "/usr/local/lib/python3.11/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
    result = await action(retry_state)
  File "/usr/local/lib/python3.11/site-packages/tenacity/_utils.py", line 99, in inner
    return call(*args, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/tenacity/__init__.py", line 398, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/base/rate_limiting_llm.py", line 165, in execute_with_retry
    return await do_attempt(), start
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/base/rate_limiting_llm.py", line 147, in do_attempt
    return await self._delegate(input, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/base/base_llm.py", line 48, in __call__
    return await self._invoke_json(input, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/openai_chat_llm.py", line 82, in _invoke_json
    result = await generate()
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/openai_chat_llm.py", line 74, in generate
    await self._native_json(input, {**kwargs, "name": call_name})
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/openai_chat_llm.py", line 108, in _native_json
    json_output = try_parse_json_object(raw_output)
  File "/usr/local/lib/python3.11/site-packages/graphrag/llm/openai/utils.py", line 93, in try_parse_json_object
    result = json.loads(input)
  File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)
03:00:03,498 graphrag.index.reporting.file_workflow_callbacks INFO Community Report Extraction Error details=None
03:00:03,498 graphrag.index.verbs.graph.report.strategies.graph_intelligence.run_graph_intelligence WARNING No report found for community: 0
03:00:03,559 datashaper.workflow.workflow INFO executing verb window
03:00:03,559 datashaper.workflow.workflow ERROR Error executing verb "window" in create_final_community_reports: 'community'
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/datashaper/workflow/workflow.py", line 410, in _execute_verb
    result = node.verb.func(**verb_args)
  File "/usr/local/lib/python3.11/site-packages/datashaper/engine/verbs/window.py", line 73, in window
    window = __window_function_map[window_operation]
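The error position in the log is consistent with the fenced-output diagnosis: `json.loads` skips the leading newline, then fails on the backtick that opens the fence, which is exactly char 1 / line 2 / column 1. A minimal repro, with an illustrative payload standing in for the model's actual response:

```python
import json

# Illustrative reconstruction of the failing model output: a markdown fence
# preceded by a newline, instead of bare JSON.
raw_output = '\n```json\n{"title": "Scrooge and Marley\'s Business Community"}\n```'

try:
    json.loads(raw_output)
except json.JSONDecodeError as err:
    # The decoder skips the leading '\n' as whitespace, then hits '`':
    print(err)  # Expecting value: line 2 column 1 (char 1)
```

The community report is then missing, which is why the downstream window verb later fails with KeyError: 'community'.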