run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: VectorStoreIndex with async=True not working when using QdrantVectorStore #16143

Open pydv9991 opened 2 hours ago

pydv9991 commented 2 hours ago

Bug Description

Async functionality is not working with llama-index-core==0.10.65 and llama-index-vector-stores-qdrant==0.2.14.

Version

llama-index==0.10.65

Steps to Reproduce

Try using QdrantVectorStore with a VectorStoreIndex with async=True: the index fails to initialize. I am awaiting the call properly (via await or asyncio.run), but it still fails with a RuntimeWarning saying the coroutine 'VectorStoreIndex._async_add_nodes_to_index' was never awaited (full output in the logs section).

Relevant Logs/Tracebacks

RuntimeWarning: Enable tracemalloc to get the object allocation traceback
/home/ubuntu/testing/codex_genai/codex_ai/modelling/qdrant.py:227: RuntimeWarning: coroutine 'VectorStoreIndex._async_add_nodes_to_index' was never awaited
  return False
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
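For context, the warning above is Python's standard signal that a sync code path created a coroutine object and then discarded it without awaiting it. A minimal stdlib-only sketch (the `Index` class and its methods below are hypothetical stand-ins, not LlamaIndex's actual code) reproduces the broken pattern and the awaited fix:

```python
import asyncio
import gc
import warnings


class Index:
    """Stand-in for an index with an async insertion path (hypothetical)."""

    async def _async_add_nodes(self, nodes):
        await asyncio.sleep(0)  # stand-in for real async vector-store I/O
        return len(nodes)

    def add_broken(self, nodes):
        # BUG pattern: calling the async method from sync code creates a
        # coroutine object that is never awaited; when it is garbage-collected,
        # Python emits "RuntimeWarning: coroutine '...' was never awaited".
        self._async_add_nodes(nodes)
        return False

    async def add_fixed(self, nodes):
        # FIX pattern: the coroutine is awaited on the running event loop.
        return await self._async_add_nodes(nodes)


if __name__ == "__main__":
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        Index().add_broken(["n1", "n2"])
        gc.collect()  # force the discarded coroutine to be finalized now

    # The broken path triggers the same class of warning seen in the logs.
    assert any("never awaited" in str(w.message) for w in caught)

    # The fixed path runs the coroutine to completion.
    assert asyncio.run(Index().add_fixed(["n1", "n2"])) == 2
    print("ok")
```

If the equivalent of `add_broken` exists somewhere inside the library's async=True code path, that would explain why the index never finishes initializing even though the caller uses await or asyncio.run correctly.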
dosubot[bot] commented 2 hours ago

Hey @pydv9991! I see you're having trouble with the async functionality in VectorStoreIndex when using QdrantVectorStore. Unfortunately, I can't help with this specific issue.

I'm a bot designed to help solve bugs, answer technical questions, and gather engineering knowledge. If you think I should have answered this request, feel free to reach out to the Dosu Team.

To continue talking to Dosu, mention @dosu.