Closed cocoza4 closed 2 months ago
🤖
Hello @cocoza4! 🦙
I'm Dosu, an AI assistant here to help with your LlamaIndex issues, answer your questions, and guide you through contributions while we wait for a human maintainer. Just think of me as your friendly neighborhood bot! I look forward to assisting you.
The RuntimeError "Session and connector has to use same event loop" is typically raised when you are trying to use an asynchronous session or connector that was created in a different event loop than the one you are currently in.

In the context of LlamaIndex, this error might occur when calling `query_engine.query()` for the second time because the `SQLJoinQueryEngine` class, which is responsible for handling queries, is not properly managing its asynchronous tasks. The `_aquery` method in the `SQLJoinQueryEngine` class is marked as an asynchronous method, but it actually calls the synchronous `_query` method. This could cause issues with the event loop, especially if you're calling `query_engine.query()` multiple times in an asynchronous context.
Here is the relevant code:
```python
async def _aquery(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
    # TODO: make async
    return self._query(query_bundle)
```
To fix this issue, you should ensure that the `_aquery` method is truly asynchronous. This might involve refactoring the `_query` method, and any methods it calls, to be asynchronous as well.

Please note that without the complete code and the exact scenario in which the error occurs, this is just a probable cause and solution. The actual issue might be different based on how and where you're calling `query_engine.query()`.
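One way to make a synchronous method safe to call from an async path is to offload it to a worker thread so the running event loop is never blocked. Here is a minimal sketch of that idea; `FakeQueryEngine` and its methods are illustrative stand-ins, not LlamaIndex's actual classes:

```python
import asyncio


class FakeQueryEngine:
    """Hypothetical engine with a blocking _query, mirroring the shape above."""

    def _query(self, query_bundle: str) -> str:
        # Pretend this does blocking I/O (e.g. a database or HTTP call).
        return f"response for {query_bundle}"

    async def _aquery(self, query_bundle: str) -> str:
        # asyncio.to_thread (Python 3.9+) runs the blocking call in a worker
        # thread, so the current event loop stays free for other tasks.
        return await asyncio.to_thread(self._query, query_bundle)


result = asyncio.run(FakeQueryEngine()._aquery("what is llama_index?"))
print(result)  # response for what is llama_index?
```

This is only a sketch of the general technique; a real fix inside `SQLJoinQueryEngine` would likely mean using the async variants of the methods `_query` calls rather than wrapping everything in threads.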
As for the function of the `query_engine.query()` method in the LlamaIndex codebase, it is designed to handle queries to the LlamaIndex. It accepts a `QueryType` object as an argument, which can be either a string or a `QueryBundle` object.

If the input is a string, it is converted into a `QueryBundle` object. The method then calls the `_query()` method with the `QueryBundle` object as an argument. The `_query()` method is abstract, meaning it is expected to be implemented by any class that inherits from `BaseQueryEngine`.

The `query()` method is also wrapped in a context manager provided by `self.callback_manager.as_trace("query")`. This suggests that the method is designed to be used with some form of callback system, which can be used to trace the execution of the query.
Here is the relevant code:
```python
def query(self, str_or_query_bundle: QueryType) -> RESPONSE_TYPE:
    with self.callback_manager.as_trace("query"):
        if isinstance(str_or_query_bundle, str):
            str_or_query_bundle = QueryBundle(str_or_query_bundle)
        return self._query(str_or_query_bundle)
```
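The string-to-bundle dispatch above can be sketched with simplified stand-ins. Note that `QueryBundle` and the engine class here are illustrative, not LlamaIndex's real implementations:

```python
from dataclasses import dataclass


@dataclass
class QueryBundle:
    """Minimal stand-in for LlamaIndex's QueryBundle: just wraps the query text."""
    query_str: str


class EchoQueryEngine:
    """Hypothetical engine showing the same dispatch as query() above."""

    def _query(self, query_bundle: QueryBundle) -> str:
        return f"echo: {query_bundle.query_str}"

    def query(self, str_or_query_bundle) -> str:
        # Plain strings are wrapped into a QueryBundle before dispatch,
        # so callers can pass either form interchangeably.
        if isinstance(str_or_query_bundle, str):
            str_or_query_bundle = QueryBundle(str_or_query_bundle)
        return self._query(str_or_query_bundle)


engine = EchoQueryEngine()
print(engine.query("hello"))               # echo: hello
print(engine.query(QueryBundle("hello")))  # echo: hello
```

Both call forms reach the same `_query()`, which is why subclasses only need to implement that one method.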
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
-Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosubot I have the same problem. How can I use Elasticsearch with a synchronous client? I get the error below when I try to use one:
```python
connection_params: Dict[str, Any] = {
    "hosts": ["https://10.206.20.47:9200"],
    "basic_auth": ("elastic", "pKgZ2rSFXlKHY8o9T2Q+"),
    "verify_certs": False,
}

sync_es_client = elasticsearch.Elasticsearch(**connection_params)

es_store = ElasticsearchStore(
    index_name="sk_index",
    es_client=sync_es_client,
)
```
Got this error:

```
File "C:\Users\01420524\.conda\envs\sk-gpt\lib\asyncio\futures.py", line 201, in result
  raise self._exception.with_traceback(self._exception_tb)
File "C:\Users\01420524\.conda\envs\sk-gpt\lib\asyncio\tasks.py", line 232, in __step
  result = coro.send(None)
File "C:\Users\01420524\.conda\envs\sk-gpt\lib\site-packages\llama_index\vector_stores\elasticsearch\base.py", line 383, in async_add
  await self._create_index_if_not_exists(
File "C:\Users\01420524\.conda\envs\sk-gpt\lib\site-packages\llama_index\vector_stores\elasticsearch\base.py", line 281, in _create_index_if_not_exists
  if await self.client.indices.exists(index=index_name):
TypeError: object HeadApiResponse can't be used in 'await' expression
```
How can I solve this?
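For context, the traceback shows the store `await`ing `client.indices.exists(...)`, but a synchronous `Elasticsearch` client returns a plain response object rather than a coroutine, and plain objects cannot be awaited. The same failure mode can be reproduced without Elasticsearch at all; `SyncIndices` and `create_index_if_not_exists` below are illustrative stand-ins, not the library's code:

```python
import asyncio


class SyncIndices:
    """Stand-in for a synchronous client's .indices namespace."""

    def exists(self, index: str) -> bool:
        # Returns a plain value, just as the sync Elasticsearch client
        # returns a HeadApiResponse instead of a coroutine.
        return True


async def create_index_if_not_exists(indices: SyncIndices, index_name: str):
    # Awaiting a non-awaitable raises TypeError, matching the traceback above.
    return await indices.exists(index=index_name)


try:
    asyncio.run(create_index_if_not_exists(SyncIndices(), "sk_index"))
except TypeError as exc:
    print("TypeError:", exc)
```

Given this, a plausible fix (an assumption, not verified against every llama-index version) is to construct `elasticsearch.AsyncElasticsearch(**connection_params)` and pass that as `es_client`, since the store's code path awaits client calls internally.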
Bug Description
I'm unable to create a sample app that reproduces this issue, but it seems to be related to https://stackoverflow.com/questions/57678844/error-using-shared-tcpconnector-in-aiohttp
However, I notice that when I recreate the `vector_store` instance on every request to Elasticsearch, the problem goes away.
Version
0.9.24
Steps to Reproduce
N/A
Relevant Logs/Tracebacks