milvus-io / pymilvus

Python SDK for Milvus.
Apache License 2.0

[Bug]: [Milvus 2.3.7-GPU] Fail to Query #1929

Open · Nanjangpan opened this issue 8 months ago

Nanjangpan commented 8 months ago

Is there an existing issue for this?

Describe the bug

After building a GPU index, the following error occurs when querying the collection with `collection.query` in pymilvus.

ERROR:pymilvus.decorators:RPC error: [query], <MilvusException: (code=1, message=failed to search/query delegator 18 for channel by-dev-rootcoord-dml_11_447378442453326967v1: fail to Query, QueryNode ID = 18, reason=worker(18) query failed: => failed to get vector, not implemented)>, <Time:{'RPC start': '2024-02-13 15:44:10.161790', 'RPC error': '2024-02-13 15:44:10.833928'}>

The environment is as follows:

Milvus version: 2.3.7-gpu
pymilvus version: 2.3.6 and 2.2.12
index params :   {
        "index_type": "GPU_IVF_FLAT",
        "metric_type": "L2",
        "params": {"nlist": 128},
    }
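
For context, this is roughly how such an index would be built and loaded with pymilvus; the connection parameters, collection name, and vector field name below are illustrative assumptions, not taken from this report:

    from pymilvus import connections, Collection

    connections.connect(host="localhost", port="19530")  # assumed local deployment
    collection = Collection("my_collection")             # hypothetical collection name

    collection.create_index(
        field_name="embedding",                          # hypothetical vector field name
        index_params={
            "index_type": "GPU_IVF_FLAT",
            "metric_type": "L2",
            "params": {"nlist": 128},
        },
    )
    collection.load()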

query code : 
    expr = "pk in [0, 1, 2]"
    output_fields = [ID_FIELD_NAME, EMBEDDING_FIELD_NAME]

    result = collection.query(
        expr=expr,
        offset=0,
        limit=10,
        output_fields=output_fields,
    )
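
Note: the "failed to get vector, not implemented" part of the error suggests the query fails while fetching the raw embedding vector requested through output_fields from a GPU index. A possible workaround, sketched below on the assumption that only raw-vector retrieval is unsupported, is to request scalar fields only (ID_FIELD_NAME is the script's own constant for the primary-key field):

    # Workaround sketch: leave the embedding field out of output_fields so the
    # server does not have to fetch raw vectors from the GPU index.
    result = collection.query(
        expr="pk in [0, 1, 2]",
        offset=0,
        limit=10,
        output_fields=[ID_FIELD_NAME],
    )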

Expected Behavior

The query should succeed and return the requested fields instead of raising this error.

Steps/Code To Reproduce behavior

index params :   {
        "index_type": "GPU_IVF_FLAT",
        "metric_type": "L2",
        "params": {"nlist": 128},
    }

query code : 
    expr = "pk in [0, 1, 2]"
    output_fields = [ID_FIELD_NAME, EMBEDDING_FIELD_NAME]

    result = collection.query(
        expr=expr,
        offset=0,
        limit=10,
        output_fields=output_fields,
    )

Environment details

- Hardware/Software conditions (OS, CPU, GPU, Memory): CentOS, sufficient resources
- Method of installation (Docker, or from source): milvusdb/milvus:v2.3.7-gpu
- Milvus version (v0.3.1, or v0.4.0): 2.3.7
- Milvus configuration (Settings you made in `server_config.yaml`):

Anything else?

No response

XuanYang-cn commented 7 months ago

@Nanjangpan This is by design; it will be supported later, but it is not on our schedule right now.
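
Until raw-vector retrieval from GPU indexes is supported, one possible workaround is to rebuild the affected field with a CPU index such as IVF_FLAT, which can return stored vectors through query. A minimal sketch, assuming a collection named my_collection with a vector field named embedding:

    from pymilvus import connections, Collection

    connections.connect(host="localhost", port="19530")  # assumed local deployment
    collection = Collection("my_collection")             # hypothetical collection name

    collection.release()      # the index cannot be changed while the collection is loaded
    collection.drop_index()
    collection.create_index(
        field_name="embedding",                          # hypothetical vector field name
        index_params={
            "index_type": "IVF_FLAT",                    # CPU index instead of GPU_IVF_FLAT
            "metric_type": "L2",
            "params": {"nlist": 128},
        },
    )
    collection.load()

With a CPU index in place, the original query requesting EMBEDDING_FIELD_NAME should return the stored vectors, at the cost of losing GPU-accelerated search.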