Closed · mdciri closed this issue 2 months ago
Hey, @mdciri! I'm here to help you with your bug report. I can assist with bug fixes, answer your questions, and guide you on becoming a contributor. Let's squash this bug together!
Your script encounters a `RuntimeError` due to nested async detected and a `NotImplementedError` from `get_responses` because the `BaseAutoRetriever` class in the LlamaIndex framework has both synchronous and asynchronous methods for generating retrieval specs and retrieving data. If you are using the asynchronous methods (`agenerate_retrieval_spec` and `_aretrieve`), you need to ensure that all parts of your code are compatible with async execution.
To fix this issue, ensure that you are not mixing synchronous and asynchronous calls. If you intend to use async methods, make sure that your entire call stack supports async execution. Here is an example of how you can use the async methods correctly:
```python
import asyncio

from llama_index.legacy.schema import QueryBundle
from llama_index.legacy.core.base_auto_retriever import BaseAutoRetriever

async def main():
    # Assuming you have already initialized `retriever` and `query_bundle`
    query_engine = BaseAutoRetriever(retriever=retriever)

    # Use the async method to query
    response = await query_engine._aretrieve(query_bundle)
    print(response)

# Run the async main function
asyncio.run(main())
```
Ensure that your `retriever` and any other components used within `BaseAutoRetriever` also support async operations. This should resolve the `RuntimeError` due to nested async detected.
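To make the requirement concrete, here is a minimal stdlib-only sketch of the sync/async split. The class names are hypothetical stand-ins, not the actual LlamaIndex API: the point is that a retriever whose async method is left as the default will raise `NotImplementedError` for async callers, so both paths must be overridden.

```python
import asyncio

# Hypothetical minimal base class mirroring the sync/async method split:
# the async path raises NotImplementedError unless a subclass overrides it.
class MinimalBaseRetriever:
    def _retrieve(self, query: str):
        raise NotImplementedError

    async def _aretrieve(self, query: str):
        # Default async path: not implemented unless overridden.
        raise NotImplementedError

class MyRetriever(MinimalBaseRetriever):
    def _retrieve(self, query):
        return [f"sync result for {query!r}"]

    async def _aretrieve(self, query):
        # Override the async path too, so async callers don't crash.
        return [f"async result for {query!r}"]

print(MyRetriever()._retrieve("test"))
print(asyncio.run(MyRetriever()._aretrieve("test")))
```

A retriever that only overrides `_retrieve` works fine for synchronous evaluation helpers but fails as soon as anything in the call stack awaits `_aretrieve`.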
Regarding the `NotImplementedError` from `get_responses`, it seems that the `get_responses` method is not implemented in the provided `BaseAutoRetriever` class. According to the `BaseAutoRetriever` class in the LlamaIndex framework, you should use the `generate_retrieval_spec` or `agenerate_retrieval_spec` methods to generate the retrieval spec, and then use the `_retrieve` or `_aretrieve` methods to retrieve the data. Here is an example of how you can use these methods:
```python
from llama_index.legacy.schema import QueryBundle
from llama_index.legacy.core.base_auto_retriever import BaseAutoRetriever

# Assuming you have already initialized `retriever` and `query_bundle`
query_engine = BaseAutoRetriever(retriever=retriever)

# Use the synchronous method to retrieve
response = query_engine._retrieve(query_bundle)
print(response)
```
Make sure that the class you are using provides a concrete implementation of the required methods, and avoid calling any unimplemented methods. This should resolve the `NotImplementedError`.
Additionally, if you are using the `get_responses` function from the `llama_index.legacy.evaluation.eval_utils` module, ensure you are using it correctly. The `get_responses` function is a synchronous wrapper around the `aget_responses` async function. Here is an example of how to use it:
```python
from llama_index.legacy.evaluation.eval_utils import get_responses
from llama_index.legacy.core.base_query_engine import BaseQueryEngine

# Assuming you have already initialized `query_engine` and `questions`
responses = get_responses(questions, query_engine)
print(responses)
```
This should help you avoid the `NotImplementedError` and ensure that your script runs smoothly [1][2][3][4].
My retriever is built with the `AutoMergingRetriever` class from `llama_index.core.retrievers`, which is a `BaseRetriever`, as in: https://docs.llamaindex.ai/en/stable/examples/retrievers/auto_merging_retriever/.
@mdciri I just tried the notebook using the latest version of llama-index, and all functions run fine, including `get_responses` and `evaluate`.
Please see the notebook here: https://colab.research.google.com/drive/1evtawQnzZRuP4OksVUiyabPylCVlxFiu?usp=sharing
Not sure how you got to this state if `nest_asyncio` is applied.
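For context on why `nest_asyncio` matters here: the stock event loop refuses to be started from inside an already-running loop, which is exactly the "nested async" `RuntimeError` in this report, and it is the behavior `nest_asyncio.apply()` patches away. A stdlib-only sketch reproducing that error:

```python
import asyncio

async def inner():
    return 42

async def outer():
    # A sync wrapper (like get_responses) that internally calls
    # asyncio.run() fails when invoked from a running event loop.
    try:
        asyncio.run(inner())
    except RuntimeError as e:
        return str(e)
    return "no error"

msg = asyncio.run(outer())
print(msg)  # mentions "cannot be called from a running event loop"
```

Running `import nest_asyncio; nest_asyncio.apply()` at the top of a notebook is the usual way to allow this kind of nesting.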
You need to give me access to the notebook. Could it be the LLM? I am using the Bedrock provider and not OpenAI.
In `get_responses()` I give as input a list of strings as questions, and the `automerging_engine` is the output of `RetrieverQueryEngine.from_args(retriever, llm=bedrock_llm)`.
You should have access now (whoops, thought I hit the give all access button initially)
If you are using Bedrock, the async methods aren't implemented, so that is probably the issue.
Contributions welcome. I don't have access to Bedrock to do it myself. You'd just need to use async boto3, I think.
I see. So the problem is the Bedrock provider. Indeed, it was strange that it was not working. Please let me know when the issue is resolved and I will try again.
thanks a lot.
@logan-markewich do you have any news?
@mdciri we don't have access to bedrock to test right now. I welcome a contribution. You'd just need to implement the same as the sync methods, but using async boto3 sessions/clients
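Until the Bedrock integration gains native async methods, one common stopgap is to run the blocking sync call in a worker thread so it can be awaited. This stdlib-only sketch uses hypothetical names (`SyncOnlyLLM`, `acomplete`), not the actual LlamaIndex Bedrock API; a real contribution would instead reimplement the methods with async boto3 clients, as described above.

```python
import asyncio

# Hypothetical stand-in for a sync-only client (e.g. the Bedrock
# integration, whose async methods are not implemented).
class SyncOnlyLLM:
    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"

async def acomplete(llm: SyncOnlyLLM, prompt: str) -> str:
    # Interim workaround: run the blocking sync call in a worker thread
    # so it can be awaited without blocking the event loop.
    return await asyncio.to_thread(llm.complete, prompt)

print(asyncio.run(acomplete(SyncOnlyLLM(), "hello")))
```

This keeps async callers working at the cost of a thread per call; true async I/O via the underlying HTTP client is the proper fix.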
@logan-markewich unfortunately I think I need a bit more help 😅
Bug Description
I have created a RAG with an auto-merging retriever. It works, and I am trying to evaluate it, but every time I call the functions as:
that need:
it breaks (see the attached picture), as:
Nevertheless, I can generate a response simply doing:
It works even if I do a loop like:
Same thing if I use:
I am following the example in https://docs.llamaindex.ai/en/stable/examples/retrievers/auto_merging_retriever/.
What is the problem? Can you help me, please?
Version
0.10.43
Steps to Reproduce
Relevant Logs/Tracebacks
No response