milvus-io / milvus

A cloud-native vector database, storage for next generation AI applications
https://milvus.io
Apache License 2.0

<MilvusException: (code=65535, message=empty sparse float vector row)> #32972

Open shilei4260 opened 2 months ago

shilei4260 commented 2 months ago

Is there an existing issue for this?

Environment

- Milvus version:
- Deployment mode(standalone or cluster):
- MQ type(rocksmq, pulsar or kafka):    
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

An error occurs when using sparse and dense vectors together, following https://github.com/milvus-io/pymilvus/blob/master/examples/hello_hybrid_sparse_dense.py

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

No response

Anything else?

No response

yanliang567 commented 2 months ago

@shilei4260 Which version of Milvus are you running? Please provide the Milvus logs for investigation, thanks. /assign @shilei4260 /unassign

xiaofan-luan commented 2 months ago

Which model are you using, random or M3?

stale[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

xxxfzxxx commented 2 weeks ago

Hi, I met a similar error. I use the BM25 embedding function and call encode_queries:

sparse_embeddings = self.bm25_ef.encode_queries([rewritten_query])

but the returned sparse embedding is empty. Why is that? The bm25_ef is:

def bm25_ef(self):
    bm = BM25EmbeddingFunction(build_default_analyzer(language="zh"))
    bm.load("bm25_params.json")
    return bm

Note that my input query is "图片尺寸"; I think the BM25 tokenizer, i.e. the default analyzer, should split it into "图片" and "尺寸". I can find the escaped (ASCII) codes for "图片" and "尺寸" in bm25_params.json, so I think the problem is that the default analyzer does not tokenize my query.

xxxfzxxx commented 2 weeks ago

urgent

wxywb commented 2 weeks ago

@xxxfzxxx I'm checking this issue.

wxywb commented 2 weeks ago

@xxxfzxxx your observation is correct.

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer

analyzer = build_default_analyzer(language="zh")

corpus = [
    "在登记册上所有的图片尺寸需要保持一致"
]

# The analyzer tokenizes the text into tokens.
tokens = analyzer(corpus[0])
print(analyzer.tokenizer.__dict__)
print("tokens:", tokens)
# output:
# tokens: ['登记册', '上', '图片尺寸', '保持一致']

The popular Chinese tokenizer project jieba, which this implementation uses, will not split '图片尺寸' into two words. However, jieba lets users adjust its vocabulary. You can create a new file called custom.txt with the following content:

图片 10000
尺寸 10000
from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer
import jieba

# Load the custom vocabulary before building the analyzer.
jieba.load_userdict("./custom.txt")
analyzer = build_default_analyzer(language="zh")

corpus = [
    "在登记册上所有的图片尺寸需要保持一致"
]

tokens = analyzer(corpus[0])
print(analyzer.tokenizer.__dict__)
print("tokens:", tokens)
# output:
# tokens: ['登记册', '上', '图片', '尺寸', '保持一致']

wxywb commented 2 weeks ago

Adjusting the jieba vocabulary cannot handle all corner cases. At least we could use a naive method:

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer

class SimpleChineseTokenizer:
    # Naive fallback: split the text into individual characters.
    def tokenize(self, text: str):
        return list(text)

analyzer = build_default_analyzer(language="zh")
analyzer.tokenizer = SimpleChineseTokenizer()

corpus = [
    "在登记册上所有的图片尺寸需要保持一致"
]

tokens = analyzer(corpus[0])
print(analyzer.tokenizer.__dict__)
print("tokens:", tokens)
# output:
# tokens: ['登', '记', '册', '上', '图', '片', '尺', '寸', '需', '保', '持', '致']

xxxfzxxx commented 2 weeks ago

I wonder how the Milvus built-in BM25EmbeddingFunction embeds an unseen word in the query. From my observation, it returns nothing (None). What is the best solution if the tokens in the query do not occur in the fitted BM25 token dictionary?

wxywb commented 2 weeks ago

@xxxfzxxx BM25 in this implementation computes statistics (term frequencies, IDFs) over the tokenized words in the documents. If a word tokenized from the query was not seen in the documents, it contributes nothing to the relevance score. If you have such concerns, I think the best strategy is to tokenize Chinese sentences into single characters; for English, you need to tokenize into subwords (like GPT's BPE tokens).
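For illustration, a minimal sketch of this behavior (the toy corpus and query strings here are hypothetical, not the asker's data):

from pymilvus.model.sparse import BM25EmbeddingFunction
from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer

analyzer = build_default_analyzer(language="zh")
bm25_ef = BM25EmbeddingFunction(analyzer)

# Fit on a one-sentence toy corpus (illustration only).
bm25_ef.fit(["在登记册上所有的图片尺寸需要保持一致"])

# A query whose tokens never appeared in the corpus maps to an all-zero row.
unseen = bm25_ef.encode_queries(["天气预报"])
print("unseen nnz:", unseen.nnz)   # expected: 0

# A query sharing tokens with the corpus has non-zero entries.
seen = bm25_ef.encode_queries(["登记册"])
print("seen nnz:", seen.nnz)       # expected: > 0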

wxywb commented 2 weeks ago

> hi, I met a similar error. I use bm25 embedding function […] I think the problem is the default analyzer does not tokenize my query.

Do you mean you get a zero-size sparse embedding, or a sparse embedding that is all zeros (with size equal to your len(idf))?
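For reference, a minimal scipy sketch of the two cases being distinguished, with a hypothetical vocabulary size N:

from scipy.sparse import csr_matrix

N = 1000  # hypothetical vocabulary size

zero_size = csr_matrix((1, 0))   # no columns at all: shape (1, 0)
all_zeros = csr_matrix((1, N))   # N columns, every value zero: shape (1, N)

print(zero_size.shape, zero_size.nnz)  # (1, 0) 0
print(all_zeros.shape, all_zeros.nnz)  # (1, 1000) 0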

xiaofan-luan commented 2 weeks ago

If the corpus doesn't have this word, you will get 0 in this dimension, because no document will match this word.

xxxfzxxx commented 2 weeks ago

> Do you mean you get a zero-size sparse embedding, or a sparse embedding that is all zeros (with size equal to your len(idf))?

Yes, I printed the "图片尺寸" sparse embedding and it outputs nothing. It should give me a csr matrix, right?

wxywb commented 2 weeks ago

> Yes, I printed the "图片尺寸" sparse embedding and it outputs nothing. It should give me a csr matrix, right?

Please show me your full code.

xxxfzxxx commented 1 week ago

dense_embeddings = [self.bgem3_model.get_embedding([query])[0]['dense_vecs']]
rewritten_query = self.get_query_rewrite(query)
sparse_embeddings = self.bm25_ef.encode_queries([rewritten_query])

col = Collection(name=collection_name)
col.load()

search_param_dense = {
    "data": dense_embeddings,
    "anns_field": "dense_vector",
    "param": {"metric_type": "COSINE", "params": {"nprobe": 10}},
    "limit": 100
}
search_param_sparse = {
    "data": sparse_embeddings,
    "anns_field": "sparse_vector",
    "param": {"metric_type": "IP", "params": {"nprobe": 10}},
    "limit": 100  # TODO
}
request_dense = AnnSearchRequest(**search_param_dense)
request_sparse = AnnSearchRequest(**search_param_sparse)

reqs = [request_dense, request_sparse]
weighted_rerank = WeightedRanker(dense_weight, 1 - dense_weight)

res = col.hybrid_search(
    reqs,
    weighted_rerank,
    limit=retrieved_cnt,
    output_fields=['doc_id', 'text', 'metadata']
)


wxywb commented 1 week ago

I wonder how you got the None sparse embedding. https://github.com/milvus-io/milvus-model/blob/d812c9a84f2c530919ddffec8bf4024cce841e6b/milvus_model/sparse/bm25/bm25.py#L130 You should get a csr_array even if you have an empty self.idf.

xxxfzxxx commented 1 week ago

> I wonder how you got the None sparse embedding. […]

My bad. I checked the type of sparse_embeddings with print(">>>>", type(sparse_embeddings), sparse_embeddings), and the output is >>>> <class 'scipy.sparse._csr.csr_matrix'>, meaning the sparse embedding is a csr_matrix. Since all values in the matrix are zeros, it does not print anything.

Then, how do I search with it? Can you tell me how to update my hybrid search (the same hybrid_search code as above)? It raises:

raise MilvusException(status.code, status.reason, status.error_code)
pymilvus.exceptions.MilvusException: <MilvusException: (code=65535, message=fail to search on QueryNode 33: worker(33) query failed: Assert "size > 0" at /go/src/github.com/milvus-io/milvus/internal/core/src/common/Utils.h:227 => Sparse row data should not be empty)>

wxywb commented 1 week ago

@xxxfzxxx Your sparse embeddings seem to have zero length. Use the following code to verify this:

print(sparse_embeddings.toarray().shape)

I think it will be a 0-length sparse embedding. Then you need to verify your BM25 idf:

print('elements in idf:', len(bm25_ef.idf))

It shouldn't be empty if you have fitted your corpus.

xxxfzxxx commented 1 week ago

(1, 18722)
elements in idf: 18722

xxxfzxxx commented 1 week ago

Note that the sparse_vector schema is FieldSchema(name="sparse_vector", dtype=DataType.SPARSE_FLOAT_VECTOR)

sparse_index = {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"}
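For context, a minimal sketch of how such a collection could be declared; the collection name, the IVF_FLAT dense index, and the 1024-dim dense field (BGE-M3's dense dimension) are assumptions, not the asker's actual setup:

from pymilvus import connections, Collection, CollectionSchema, FieldSchema, DataType

connections.connect(host="localhost", port="19530")

fields = [
    FieldSchema(name="doc_id", dtype=DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema(name="text", dtype=DataType.VARCHAR, max_length=65535),
    FieldSchema(name="dense_vector", dtype=DataType.FLOAT_VECTOR, dim=1024),
    FieldSchema(name="sparse_vector", dtype=DataType.SPARSE_FLOAT_VECTOR),
]
col = Collection(name="hybrid_demo", schema=CollectionSchema(fields))

# The sparse index exactly as quoted above, plus a dense counterpart.
col.create_index("sparse_vector",
                 {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"})
col.create_index("dense_vector",
                 {"index_type": "IVF_FLAT", "metric_type": "COSINE",
                  "params": {"nlist": 128}})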

xxxfzxxx commented 1 week ago

> @xxxfzxxx Your sparse embeddings seem to have zero length. […] It shouldn't be empty if you have fitted your corpus.

My query's sparse embeddings are not zero-length; it is actually an all-zero csr_matrix.

wxywb commented 1 week ago

Milvus requires the number of non-zeros (nnz) in a sparse embedding (for both documents and queries) to be greater than 0. Users need to check the nnz of every row of the sparse embeddings before inserting/searching; when it equals zero, you need to fall back to dense retrieval.

sparse_embeddings.nnz     # nnz over all rows, if sparse_embeddings contains multiple rows
sparse_embeddings[0].nnz  # nnz of the first row

The reason is that since IP is the only available distance metric, an embedding with zero non-zero values would have an IP distance of 0 to every other embedding, so no distance judgement can be made.
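A minimal sketch of such a guard, reusing the field names and search params from the code earlier in this thread; the fallback simply omits the sparse request:

from pymilvus import AnnSearchRequest

def build_requests(dense_embeddings, sparse_embeddings, limit=100):
    # Always search the dense field.
    reqs = [AnnSearchRequest(data=dense_embeddings, anns_field="dense_vector",
                             param={"metric_type": "COSINE", "params": {"nprobe": 10}},
                             limit=limit)]
    # Attach the sparse request only when the query row has non-zero entries;
    # otherwise fall back to dense-only retrieval.
    if sparse_embeddings.nnz > 0:
        reqs.append(AnnSearchRequest(data=sparse_embeddings, anns_field="sparse_vector",
                                     param={"metric_type": "IP"}, limit=limit))
    return reqs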

wxywb commented 1 week ago

It seems that with the BM25EmbeddingFunction there is a risk of generating an all-zero query sparse embedding, which Milvus does not support.

xxxfzxxx commented 1 week ago

I saw that https://github.com/milvus-io/milvus-model/blob/main/milvus_model/sparse/bm25/bm25.py line 194 references a JSON file to download (https://github.com/milvus-io/pymilvus-assets/releases/download/v0.1-bm25v1/bm25_msmarco_v1.json), but I cannot find it anywhere. Can you provide a Chinese version?

wxywb commented 1 week ago

It downloads this file to the directory where you executed the code. Currently I have only fitted the BM25EmbeddingFunction on the MS MARCO dataset for English. If you can fit it on your own dataset, you will get better results. If you want a pretrained sparse embedding function for Chinese, I strongly recommend testing this: https://milvus.io/docs/embed-with-bgm-m3.md.
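For example, a minimal sketch using BGEM3EmbeddingFunction from the same model subpackage (assuming pymilvus[model] and its FlagEmbedding dependency are installed; the model weights download on first run):

from pymilvus.model.hybrid import BGEM3EmbeddingFunction

bge_m3_ef = BGEM3EmbeddingFunction(model_name="BAAI/bge-m3",
                                   device="cpu", use_fp16=False)

# BGE-M3 produces both dense and learned sparse embeddings in one call,
# so no BM25 fitting is needed.
embeddings = bge_m3_ef.encode_queries(["图片尺寸"])
print(embeddings["dense"][0].shape)   # dense part, dim 1024
print(embeddings["sparse"].nnz)       # learned sparse part; expected nnz > 0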