run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

Raw response content: b'{"status":{"error":"Wrong input: Not existing vector name error: "},"time":0.030060406}' #13020

Closed Zeeshan0123 closed 2 months ago

Zeeshan0123 commented 5 months ago

Bug Description

I am getting this error and I don't know how to solve it.

Version

new version

Steps to Reproduce

```python
from qdrant_client import QdrantClient, models
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.storage.storage_context import StorageContext
from llama_index import VectorStoreIndex
from llama_index.node_parser import MarkdownElementNodeParser

# `embed_model` and `documents` are assumed to be defined earlier.

# Initialize the Qdrant client
client = QdrantClient(url="http://localhost:6333/dashboard", prefer_grpc=False, timeout=100)

# Specify the collection name
collection_name = "med_embeddings"

# If the collection doesn't exist, create it
if not client.collection_exists(collection_name):
    client.create_collection(
        collection_name=collection_name,
        vectors_config={
            "base_nodes": models.VectorParams(size=786, distance=models.Distance.DOT),
            "objects": models.VectorParams(size=786, distance=models.Distance.COSINE),
        },
    )

# Initialize the Qdrant vector store
vector_store = QdrantVectorStore(client=client, collection_name=collection_name, embed_model=embed_model)

# Parse the documents into nodes
node_parser = MarkdownElementNodeParser(llm=None, num_workers=8)
nodes = node_parser.get_nodes_from_documents(documents, progress=True)

# Get the storage context
st = StorageContext.from_defaults(vector_store=vector_store)

# Split into base nodes and objects
base_nodes, objects = node_parser.get_nodes_and_objects(nodes)

# Create the index
recursive_index = VectorStoreIndex(nodes=base_nodes + objects, storage_context=st)
```

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 5 months ago

The error you're encountering, "Not existing vector name error," suggests there's a discrepancy between the vector names in your code and those expected by your Qdrant collection schema. To address this issue, ensure the following:

  1. Verify Vector Names: Check that the vector names in your operations match the expected names in the Qdrant collection. These names are typically defined as constants in the QdrantVectorStore class, such as DENSE_VECTOR_NAME for dense vectors.

  2. Collection Configuration: Make sure your collection is correctly initialized with the vector configurations that match your operations. This includes setting up both dense and sparse vectors if your application requires hybrid search capabilities.

  3. Correct Use of Vector Names: When adding points or querying the collection, ensure the vector names used align with those defined in your collection's schema. This is crucial for operations to execute successfully.

  4. Dynamic Vector Name Handling: If your application involves sparse vectors, be aware that the vector name might dynamically switch between SPARSE_VECTOR_NAME_OLD and SPARSE_VECTOR_NAME based on the collection's setup. Your code should accommodate this dynamic selection.

If these steps do not resolve the issue, it might indicate a deeper problem with how the collection was initialized or a misconfiguration in your Qdrant setup. Double-check the collection's existence, accessibility, and schema configuration.

Additionally, you mentioned using the "new version" of LlamaIndex. It's important to ensure that your version aligns with the latest or the expected version for your implementation, as solutions might vary across different versions.

#### Details

**Sources**

- [llama-index-legacy/llama_index/legacy/vector_stores/qdrant.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/vector_stores/qdrant.py)
- [llama-index-integrations/vector_stores/llama-index-vector-stores-qdrant/llama_index/vector_stores/qdrant/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/vector_stores/llama-index-vector-stores-qdrant/llama_index/vector_stores/qdrant/base.py)
- [llama-index-legacy/llama_index/legacy/readers/qdrant.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/readers/qdrant.py)
- [llama-index-integrations/readers/llama-index-readers-qdrant/llama_index/readers/qdrant/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/readers/llama-index-readers-qdrant/llama_index/readers/qdrant/base.py)
- [llama-index-legacy/VERSION](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/VERSION)
- [llama-index-core/llama_index/core/__init__.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/__init__.py)

logan-markewich commented 5 months ago

@Zeeshan0123 I believe I fixed this; just update the package:

```shell
pip install -U llama-index-vector-stores-qdrant
```