Closed ttozser closed 5 months ago
Hey @ttozser, good to see you around here again! 🚀 I'm diving into your issue right now and will circle back with insights shortly. Hang tight! 🛸
@ahs8w, maybe you can help me with this issue. Thank you in advance.
@ahs8w, it looks like @ttozser is seeking your expertise on an issue with OpensearchVectorClient
during streaming response in FastAPI. Could you please assist? Thank you in advance!
@ttozser use the asyncio loop type and enable nesting:

```python
import nest_asyncio
import uvicorn

nest_asyncio.apply()
...
uvicorn.run(..., loop="asyncio")
```
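For context, the failure that `nest_asyncio` works around can be reproduced with plain asyncio, no FastAPI or Opensearch needed: synchronously driving a coroutine with `run_until_complete` while the event loop is already running raises a `RuntimeError`. A minimal sketch (function names are illustrative, not from the library):

```python
import asyncio

async def fetch_value():
    # Stand-in for any async call the sync code path tries to drive.
    return 42

async def handler():
    # Mimics sync library code running inside an async endpoint: the loop
    # is already running, so run_until_complete() is normally forbidden.
    loop = asyncio.get_running_loop()
    try:
        loop.run_until_complete(fetch_value())
        return "no error"
    except RuntimeError:
        return "RuntimeError"

print(asyncio.run(handler()))  # prints "RuntimeError"
```

`nest_asyncio.apply()` patches the loop so such nested `run_until_complete` calls succeed, which is why the snippet above is suggested as a workaround.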
Thank you. This fixes the problem.
I use this, but it only works for one request; the second request fails with `RuntimeError: Timeout context manager should be used inside a task`. Why?
Bug Description
I create an AsyncContentStream to return a StreamingResponse with FastAPI. When I try to initialize OpensearchVectorClient I get the following exception
The same code used to work and it broke after this PR was merged https://github.com/run-llama/llama_index/pull/11513
Version
llama-index-core = 0.10.27, llama-index-vector-stores-opensearch = 0.1.8
Steps to Reproduce
Create an AsyncContentStream and return a StreamingResponse with FastAPI. Try to initialize the OpensearchVectorClient inside the AsyncContentStream
Relevant Logs/Tracebacks