run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: #13586

Open rohan-patankar opened 1 month ago

rohan-patankar commented 1 month ago

Bug Description

I am getting the error "Observation: Error: 'NoneType' object has no attribute 'search_batch'" when using Qdrant + LlamaIndex agents.

Version

0.10.16

Steps to Reproduce

retriever_query_engine = RetrieverQueryEngine.from_args(
    recursive_retriever,
    response_synthesizer=response_synthesizer,
    service_context=service_context,
    node_postprocessors=[cohere_rerank],
    use_async=False
)
query_engine_tools_ = [
    QueryEngineTool(
        query_engine=retriever_query_engine,
        metadata=ToolMetadata(
            name="recursive_retriever",
            description="Recursive retriever for accessing documents",
        ),
    )
]

query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools_,
    service_context=service_context,
    use_async=False,
    llm=function_llm,
    verbose=True,
)

query_engine_tools = [
    QueryEngineTool(
        query_engine=query_engine,
        metadata=ToolMetadata(
            name="qualitative_question_engine",
            description=f"""A query engine that can answer qualitative questions about
            the {doc_name} documents that the user pre-selected for the conversation.""".strip(),
        ),
    ),
    QueryEngineTool(
        query_engine=vector_query_engine,
        metadata=ToolMetadata(
            name="vector_tool",
            description=f"Useful for retrieving specific context from {doc_name}.",
        ),
    ),
]

await image_links

agent = ReActAgent.from_tools(
    query_engine_tools,
    llm=function_llm,
    verbose=True,
    memory=chat_memory,
    context="""Use the available tools provided when answering a question.
    Do NOT just blindly make things up about the car unless it is grounded by the
    retrieved sources. Make sure the question is passed as-is in raw form without
    any translation.""",
    max_function_calls=3,
)

image_res = image_links.result()
return agent, top_sources, image_res, doc_context, chat_memory

While executing this code I am getting the error "Observation: Error: 'NoneType' object has no attribute 'search_batch'".

I am using Qdrant DB with async client support.

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 1 month ago

To address the "Error: 'NoneType' object has no attribute 'search_batch'" issue you're encountering with qdrant and llamaindex agents, it's crucial to ensure that the retriever object is properly initialized. This error typically indicates that the object expected to perform the search_batch operation is None, which often happens if the initialization of the retriever fails due to reasons such as incorrect API keys, environment setup issues, or misconfigurations.

Here's how you can troubleshoot and potentially resolve the issue:

  1. Verify API Key: Make sure the SCIPHI_API_KEY environment variable is correctly set and accessible in your environment. You can confirm it's correctly loaded by printing it out before initializing any components that depend on it.

  2. Check Initialization: Implement error handling around the initialization of components that rely on external services or API keys. This can help identify if a failure in this process is causing the retriever to be None.

  3. Validate Retriever: Before using the retriever, add a check to ensure it is not None. This can prevent attempts to perform operations on an uninitialized object and give you a clearer indication of where the initialization process might be failing.

By following these steps, you should be able to pinpoint the cause of the error and ensure that your retriever object is correctly initialized and ready for use.
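As a rough illustration of points 2 and 3, here is a minimal sketch (build_retriever is a hypothetical stand-in for however you construct the retriever, not a LlamaIndex API):

# Hypothetical helper that builds the retriever; wrap it so initialization
# failures surface early instead of silently leaving the object as None.
try:
    retriever = build_retriever()  # placeholder for your own setup code
except Exception as exc:
    raise RuntimeError(f"Retriever initialization failed: {exc}") from exc

# Guard against an uninitialized retriever before wiring it into a query engine.
if retriever is None:
    raise ValueError("Retriever is None; check client/API key configuration.")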

#### Details

**Sources**

- [llama-index-packs/llama-index-packs-agent-search-retriever/examples/_example.py](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-agent-search-retriever/examples/_example.py)
- [llama-index-core/llama_index/core/query_engine/retriever_query_engine.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/query_engine/retriever_query_engine.py)
- [llama-index-core/llama_index/core/agent/react/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/agent/react/base.py)
- [llama-index-core/llama_index/core/agent/legacy/react/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/agent/legacy/react/base.py)
- [llama-index-legacy/llama_index/legacy/agent/react/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/agent/react/base.py)
- [llama-index-legacy/llama_index/legacy/evaluation/batch_runner.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/evaluation/batch_runner.py)
- [llama-index-legacy/llama_index/legacy/agent/legacy/react/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/agent/legacy/react/base.py)
- [docs/docs/examples/agent/react_agent_with_query_engine.ipynb](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/agent/react_agent_with_query_engine.ipynb)
- [llama-index-core/llama_index/core/instrumentation/dispatcher.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/instrumentation/dispatcher.py)
- [llama-index-legacy/llama_index/legacy/llms/langchain.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llms/langchain.py)
- [llama-index-core/llama_index/core/readers/file/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/readers/file/base.py)

logan-markewich commented 1 month ago

Assuming this is Qdrant? Configure the async client:

QdrantVectorStore(..., client=QdrantClient(...), aclient=AsyncQdrantClient(...))
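For reference, a minimal sketch of wiring both clients in (the URL and collection name are placeholders, not taken from the original report). The error in this issue typically means the async client was never configured, so the store's internal aclient is None when the async query path calls search_batch:

from qdrant_client import QdrantClient, AsyncQdrantClient
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Placeholder connection details -- substitute your own Qdrant URL / API key.
QDRANT_URL = "http://localhost:6333"

# Sync client serves the blocking code paths.
client = QdrantClient(url=QDRANT_URL)
# Async client serves the async code paths (use_async=True, awaited queries);
# without it, async searches fail because aclient is None.
aclient = AsyncQdrantClient(url=QDRANT_URL)

vector_store = QdrantVectorStore(
    collection_name="my_collection",  # placeholder collection name
    client=client,
    aclient=aclient,
)

Any index or retriever built on top of this vector_store can then serve both sync and async queries.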