milvus-io / milvus

A cloud-native vector database, storage for next generation AI applications
https://milvus.io
Apache License 2.0

[Feature]: Support Double, Float16 and BF16 vectors #22837

Open xiaofan-luan opened 1 year ago

xiaofan-luan commented 1 year ago

Is there an existing issue for this?

Is your feature request related to a problem? Please describe.

There are many different vector types, depending on the model. The types we have received the most requests for so far are double, float16, and BF16; double and BF16 are the top priority. Anyone interested in working on this, please help.
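
For context on the two half-precision formats: float16 has 1 sign, 5 exponent, and 10 mantissa bits, while bfloat16 keeps float32's 8 exponent bits and has only 7 mantissa bits, so a bfloat16 value is simply the top 16 bits of a float32 (full range, coarse precision). A minimal numpy sketch of the difference (illustrative, not Milvus code):

```python
import numpy as np

x = np.array([3.14159], dtype=np.float32)

# bfloat16 = top 16 bits of an IEEE float32:
# 1 sign bit, 8 exponent bits (same range as fp32), 7 mantissa bits.
bits = x.view(np.uint32)
bf16 = (bits >> 16).astype(np.uint16)                      # truncate to bf16
restored = (bf16.astype(np.uint32) << 16).view(np.float32)
print(restored[0])        # 3.140625 -- coarse mantissa, full fp32 range

# float16 (IEEE half): 1 sign, 5 exponent, 10 mantissa bits.
# Finer mantissa than bf16, but values above ~65504 overflow to inf.
print(np.float16(x[0]))   # ~3.141
```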

Describe the solution you'd like.

No response

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

jon-chuang commented 1 year ago

/assign

jon-chuang commented 1 year ago

@xiaofan-luan could I ask whether it would be wrong to convert the embedding to float32, which I think has better numerical performance on most CPUs unless dedicated hardware support exists?

Or is the purpose of this issue to support storing these formats, on the assumption that the compute nodes have the right hardware (e.g. a GPU or the right Xeon chipset) to handle operations in those datatypes?

If so, do we need to implement a fallback, e.g. emulation or casting, when the appropriate compute support is missing? PyTorch handles this with autocasting.
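
A sketch of the casting fallback described above, assuming numpy and no native fp16/bf16 arithmetic: store vectors in the narrow format and upcast to float32 just before the distance kernel (function names here are hypothetical):

```python
import numpy as np

def ip_scores_fp16(query: np.ndarray, stored: np.ndarray) -> np.ndarray:
    """Inner-product scores over float16-stored vectors, computed in float32.

    `stored` is (n, dim) float16, `query` is (dim,) float16. Without native
    fp16 arithmetic, upcasting avoids both accuracy loss and the slow
    software-emulated fp16 math most CPUs would otherwise use.
    """
    return stored.astype(np.float32) @ query.astype(np.float32)

def ip_scores_bf16(query_bits: np.ndarray, stored_bits: np.ndarray) -> np.ndarray:
    """Same idea for bfloat16 held as raw uint16 bit patterns: the
    bf16 -> fp32 'cast' is just a 16-bit left shift, so it is cheap."""
    def to_fp32(bits: np.ndarray) -> np.ndarray:
        return (bits.astype(np.uint32) << 16).view(np.float32)
    return to_fp32(stored_bits) @ to_fp32(query_bits)

rng = np.random.default_rng(0)
stored = rng.standard_normal((1000, 128)).astype(np.float16)
query = rng.standard_normal(128).astype(np.float16)
print(ip_scores_fp16(query, stored).shape)  # (1000,)
```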

jon-chuang commented 1 year ago

Duplicate: https://github.com/milvus-io/milvus/issues/22132

jon-chuang commented 1 year ago

btw, bfloat16 does not exist in Faiss: https://github.com/facebookresearch/faiss/wiki/How-to-make-Faiss-run-faster, and I believe it is not in Annoy or HNSWLib either.

But Faiss does support float16, and we can compile it back in: https://github.com/milvus-io/milvus/issues/2828
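
For reference, the existing float16 path in Faiss is the scalar quantizer; a minimal sketch with the faiss Python bindings (how Milvus compiles it in is a separate question):

```python
import faiss
import numpy as np

d = 128
xb = np.random.rand(10_000, d).astype(np.float32)

# QT_fp16 stores vectors as float16 internally, halving index memory;
# they are decoded back to float32 during distance computation.
index = faiss.IndexScalarQuantizer(d, faiss.ScalarQuantizer.QT_fp16, faiss.METRIC_L2)
index.train(xb)  # effectively a no-op for QT_fp16
index.add(xb)

D, I = index.search(xb[:5], 10)
print(I.shape)  # (5, 10)
```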

jiaoew1991 commented 1 year ago

Welcome @jon-chuang! You can implement float16 first; we can discuss bf16 later. 😄
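
A note for later readers: half-precision vector fields did eventually land (Milvus 2.4 added FLOAT16_VECTOR and BFLOAT16_VECTOR). A minimal pymilvus schema sketch, assuming a 2.4+ client and server:

```python
import numpy as np
from pymilvus import CollectionSchema, FieldSchema, DataType

dim = 128
fields = [
    FieldSchema("id", DataType.INT64, is_primary=True, auto_id=True),
    # fp16 halves vector storage relative to FLOAT_VECTOR (fp32)
    FieldSchema("vec_fp16", DataType.FLOAT16_VECTOR, dim=dim),
    FieldSchema("vec_bf16", DataType.BFLOAT16_VECTOR, dim=dim),
]
schema = CollectionSchema(fields, description="half-precision vectors")

# fp16 data can be prepared directly with numpy; bf16 needs a helper
# dtype such as ml_dtypes.bfloat16, since numpy has no native bf16.
vec_fp16 = np.random.rand(dim).astype(np.float16)
```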

jiaoew1991 commented 1 year ago

/unassign @jon-chuang

jiaoew1991 commented 1 year ago

We can break down the steps into the following:

qyJasmine commented 6 months ago

Hi, I have a question: does this issue (supporting float16 in Milvus) mean that all vector indices in Milvus will support the float16 datatype, and that using float16 can lead to a significant reduction in memory cost? We are using DiskANN now, and we hope to use the float16 type in DiskANN.

xiaofan-luan commented 6 months ago

> Hi, I have a question: does this issue (supporting float16 in Milvus) mean that all vector indices in Milvus will support the float16 datatype, and that using float16 can lead to a significant reduction in memory cost? We are using DiskANN now, and we hope to use the float16 type in DiskANN.

For DiskANN, the data is already heavily quantized, so using float16 won't help reduce your memory cost.
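
To make that concrete (illustrative numbers, not from the thread): DiskANN keeps compressed PQ codes in RAM and the full-precision vectors plus graph on disk, so the raw vector dtype mostly affects disk footprint, not memory:

```python
# Back-of-the-envelope for 1M vectors at dim = 768:
n, dim = 1_000_000, 768
fp32_raw = n * dim * 4 / 2**30   # ~2.86 GiB of raw vectors (disk)
fp16_raw = n * dim * 2 / 2**30   # ~1.43 GiB -- half the raw size

# A DiskANN-style index holds PQ codes in RAM, e.g. 64 bytes/vector
# (assumed code size), independent of the raw vector dtype:
pq_ram = n * 64 / 2**30          # ~0.06 GiB either way
print(f"{fp32_raw:.2f} {fp16_raw:.2f} {pq_ram:.2f}")
```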

cqy123456 commented 5 months ago

Knowhere related issue: https://github.com/zilliztech/knowhere/issues/287

alexanderguzhva commented 2 months ago

Faiss support for BF16 is getting closer.

chasingegg commented 1 month ago

Also, we should support binary vectors.
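
Binary vectors pack one bit per dimension and are compared with Hamming distance; a minimal numpy sketch of the idea (illustrative, not the Milvus implementation):

```python
import numpy as np

# popcount lookup for uint8, since a portable numpy popcount ufunc
# (np.bitwise_count) is only available from numpy 2.0
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming(query: np.ndarray, stored: np.ndarray) -> np.ndarray:
    """Hamming distances between one packed query and (n, dim/8) packed rows."""
    return POPCOUNT[np.bitwise_xor(stored, query)].sum(axis=1)

dim = 128  # 128 bits -> 16 bytes per packed vector
rng = np.random.default_rng(0)
stored = rng.integers(0, 256, size=(1000, dim // 8), dtype=np.uint8)
query = rng.integers(0, 256, size=(dim // 8,), dtype=np.uint8)
print(hamming(query, stored).shape)  # (1000,)
```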