run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License
35.59k stars 5.02k forks

[Bug]: Event loop issue with OpensearchVectorClient during streaming response #12675

Closed ttozser closed 5 months ago

ttozser commented 5 months ago

Bug Description

I create an AsyncContentStream to return a StreamingResponse with FastAPI. When I try to initialize OpensearchVectorClient inside it, I get the following exception.

The same code used to work; it broke after this PR was merged: https://github.com/run-llama/llama_index/pull/11513

Version

llama-index-core = 0.10.27, llama-index-vector-stores-opensearch = 0.1.8

Steps to Reproduce

Create an AsyncContentStream and return a StreamingResponse with FastAPI. Try to initialize the OpensearchVectorClient inside the AsyncContentStream.

Relevant Logs/Tracebacks

|     return OpensearchVectorClient(opensearch_url, tenant, 1536,
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "...\Lib\site-packages\llama_index\vector_stores\opensearch\base.py", line 324, in __init__
    |     event_loop.run_until_complete(
    |   File "...\Lib\asyncio\base_events.py", line 629, in run_until_complete
    |     self._check_running()
    |   File "...\Lib\asyncio\base_events.py", line 588, in _check_running
    |     raise RuntimeError('This event loop is already running')
    | RuntimeError: This event loop is already running

-----------------------

    |     return OpensearchVectorClient(opensearch_url, tenant, 1536,
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "...\Lib\site-packages\llama_index\vector_stores\opensearch\base.py", line 322, in __init__
    |     event_loop = asyncio.get_event_loop()
    |                  ^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "...\Lib\asyncio\events.py", line 677, in get_event_loop
    |     raise RuntimeError('There is no current event loop in thread %r.'
    | RuntimeError: There is no current event loop in thread 'AnyIO worker thread'.
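Both tracebacks stem from the same initialization pattern. Based on the frames above (not the library's actual source), `OpensearchVectorClient.__init__` effectively does "get the current event loop, then run a coroutine to completion synchronously" — which fails in an async context (the loop is already running) and in an AnyIO worker thread (no loop exists there). This stdlib-only sketch reproduces both errors:

```python
import asyncio
import threading

def sync_init_pattern():
    # Mimics the pattern the traceback points at (an assumption,
    # reconstructed from the frames, not copied from the library):
    # grab the current loop and run a coroutine synchronously.
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.sleep(0))

errors = []

# Case 1: called from inside a running loop (the async streaming path)
# -> "This event loop is already running".
async def inside_running_loop():
    try:
        sync_init_pattern()
    except RuntimeError as e:
        errors.append(str(e))

asyncio.run(inside_running_loop())

# Case 2: called from a worker thread with no event loop (how AnyIO runs
# sync generators) -> "There is no current event loop in thread ...".
def in_worker_thread():
    try:
        sync_init_pattern()
    except RuntimeError as e:
        errors.append(str(e))

t = threading.Thread(target=in_worker_thread)
t.start()
t.join()

print(errors)
```

Either way, a synchronous `run_until_complete` inside constructor code cannot coexist with an already-running (or absent) loop, which is why the nesting workaround below helps.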
dosubot[bot] commented 5 months ago

Hey @ttozser, good to see you around here again! 🚀 I'm diving into your issue right now and will circle back with insights shortly. Hang tight! 🛸

ttozser commented 5 months ago

@ahs8w, maybe you can help me with this issue. Thank you in advance.

dosubot[bot] commented 5 months ago

@ahs8w, it looks like @ttozser is seeking your expertise on an issue with OpensearchVectorClient during streaming response in FastAPI. Could you please assist? Thank you in advance!

logan-markewich commented 5 months ago

@ttozser use the asyncio loop type and enable nested event loops:

import nest_asyncio
nest_asyncio.apply()

...
uvicorn.run(..., loop="asyncio")
ttozser commented 5 months ago

Thank you. This fixes the problem.

zqf03118 commented 2 months ago

I used this, but it only works for a single request; the second request fails with RuntimeError: Timeout context manager should be used inside a task. Why?