run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Feature Request]: Async support for Qdrant Vector store #14151

Open pradeep-suresh2002 opened 3 weeks ago

pradeep-suresh2002 commented 3 weeks ago

Feature Description

Need async support for AsyncQdrantClient in the Qdrant vector store.

Reason

To create collections asynchronously.
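
For illustration, the kind of call this would enable is roughly the following (a minimal sketch using the qdrant-client async API directly; the URL and collection name are hypothetical):

import asyncio
from qdrant_client import AsyncQdrantClient, models

async def main():
    # Hypothetical local Qdrant instance and collection name
    aclient = AsyncQdrantClient(url="http://localhost:6333")
    await aclient.create_collection(
        collection_name="my_collection",
        vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
    )

asyncio.run(main())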

Value of Feature

logan-markewich commented 3 weeks ago

@pradeep-suresh2002 async is supported, but you need to supply the async client

import qdrant_client
from llama_index.vector_stores.qdrant import QdrantVectorStore

vector_store = QdrantVectorStore(
    ...,
    client=qdrant_client.QdrantClient(...),
    aclient=qdrant_client.AsyncQdrantClient(...),
)

However, sparse vector generation is not async; it's compute-bound on your machine, so it will block either way.
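
For context, here is a minimal sketch of using the async path once both clients are supplied. It assumes a running local Qdrant instance, an existing collection named "demo" (hypothetical), and a default embedding model already configured:

import asyncio
import qdrant_client
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

async def main():
    vector_store = QdrantVectorStore(
        collection_name="demo",  # hypothetical collection name
        client=qdrant_client.QdrantClient(url="http://localhost:6333"),
        aclient=qdrant_client.AsyncQdrantClient(url="http://localhost:6333"),
    )
    index = VectorStoreIndex.from_vector_store(vector_store)
    # aquery runs the async retrieval path, which uses the supplied aclient
    response = await index.as_query_engine().aquery("What is in this collection?")
    print(response)

asyncio.run(main())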

pradeep-suresh2002 commented 3 weeks ago

For creating async clients, why are we giving both client and aclient? Can't we provide only aclient?

vector_store = QdrantVectorStore(
    ...,
    client=qdrant_client.QdrantClient(...),
    aclient=qdrant_client.AsyncQdrantClient(...),
)

logan-markewich commented 2 weeks ago

Because the code uses both a sync and an async client, and Qdrant doesn't provide a way to create one client from the other.
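
In practice the two clients are simply constructed side by side against the same Qdrant deployment (a minimal sketch; the URL and collection name are hypothetical):

import qdrant_client
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Both clients point at the same Qdrant deployment; the sync client backs the
# synchronous code paths and the async client backs the awaitable ones.
client = qdrant_client.QdrantClient(url="http://localhost:6333")
aclient = qdrant_client.AsyncQdrantClient(url="http://localhost:6333")

vector_store = QdrantVectorStore(
    collection_name="demo",  # hypothetical
    client=client,
    aclient=aclient,
)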