run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: KnowledgeGraphQueryEngine raises an error on query #13585

Open yanyi74 opened 1 month ago

yanyi74 commented 1 month ago

Bug Description

```
ValidationError: 1 validation error for LLMPredictStartEvent
template
  none is not an allowed value (type=type_error.none.not_allowed)
```

[Screenshot 2024-05-19 20:38:04]

Version

0.10.37

Steps to Reproduce

[Screenshot 2024-05-19 20:39:57] [Screenshot 2024-05-19 20:38:04]

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 1 month ago

The ValidationError indicating that `none` is not an allowed value for `template` means a `None` value is being passed where it is not permitted. This typically happens when the KnowledgeGraphQueryEngine or one of its methods expects a parameter that is missing or explicitly set to `None`.

To address this issue, follow these steps:

  1. Verify Constructor Arguments: Ensure the KnowledgeGraphQueryEngine is instantiated with all required arguments, especially storage_context, and that storage_context includes a valid graph_store.

  2. Check Method Calls: Make sure that the arguments passed to methods like generate_query are not None, unless allowed by the method's documentation.

  3. Optional Parameters: If using optional parameters, confirm they are correctly instantiated or handled when set to None.

  4. Debugging: Implement logging around the KnowledgeGraphQueryEngine methods to trace the values of parameters at runtime, helping to pinpoint where a None value is introduced (see the logging sketch after this list).

  5. Review Code for Dynamic Updates: If your implementation involves dynamically updating prompts or similar features, ensure these updates do not result in None values being passed where they are not expected.

By systematically checking these areas, you should be able to identify and correct the cause of the ValidationError. If the issue persists, creating a minimal reproducible example could further help in isolating and addressing the problem.
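
For step 4, a minimal sketch of the standard Python logging setup (nothing llama_index-specific is assumed; DEBUG level surfaces the prompts and parameter values the engine passes to the LLM):

```python
import logging
import sys

# Emit DEBUG-level logs to stdout so the parameter values flowing into the
# LLM call (including any unexpected None) appear in the output.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
```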

#### Details

**Sources**

- [llama-index-core/llama_index/core/query_engine/knowledge_graph_query_engine.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/query_engine/knowledge_graph_query_engine.py)
- [llama-index-legacy/llama_index/legacy/query_engine/knowledge_graph_query_engine.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/query_engine/knowledge_graph_query_engine.py)
**About Dosu**

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

luisenriqueramos1977 commented 1 month ago

I have a similar issue, here is the code:

```python
import os

import openai
from llama_index.core import Settings, StorageContext
from llama_index.core.query_engine import KnowledgeGraphQueryEngine
from llama_index.graph_stores.neo4j import Neo4jGraphStore
from llama_index.llms.openai import OpenAI

# OpenAI API connection
os.environ["OPENAI_API_KEY"] = "sk-proj-mykey"
openai.api_key = os.environ["OPENAI_API_KEY"]

# Define the LLM and global settings
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
Settings.llm = llm
Settings.chunk_size = 512

# Connect to the Neo4j graph store
graph_store = Neo4jGraphStore(
    username="neo4j",
    password="mypassword",
    url="bolt://localhost:7687",
    database="neo4j",
)

storage_context = StorageContext.from_defaults(graph_store=graph_store)

query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    llm=Settings.llm,
    verbose=True,
)

response = query_engine.query("what is the query?")
print(response)
```

youyunfeng commented 2 weeks ago

I have the same problem.

zyloveslego commented 2 weeks ago

I have the same problem.

remove "llm=Settings.llm", only: graph_rag_retriever = KnowledgeGraphRAGRetriever( storage_context=storage_context, verbose=True, )

Li-Yanzhi commented 1 week ago

Same issues here.

Li-Yanzhi commented 1 week ago

> I have the same problem.
>
> Remove `llm=Settings.llm` and pass only:
>
> ```python
> graph_rag_retriever = KnowledgeGraphRAGRetriever(
>     storage_context=storage_context,
>     verbose=True,
> )
> ```

We are using KnowledgeGraphQueryEngine rather than KnowledgeGraphRAGRetriever.
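
The analogous change for KnowledgeGraphQueryEngine would be to drop the explicit `llm` argument and let the engine fall back to the global `Settings` (a sketch based on the workaround above, not verified against 0.10.37):

```python
from llama_index.core import Settings
from llama_index.core.query_engine import KnowledgeGraphQueryEngine

# llm and storage_context as defined in the snippet earlier in this thread.
Settings.llm = llm

# Omit llm= entirely; the engine resolves the LLM from Settings.
query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    verbose=True,
)
```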