ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Model Request for BAAI/bge-m3 (XLMRoberta-based Multilingual Embedding Model) #6007

Closed mofanke closed 2 months ago

mofanke commented 6 months ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Feature Description

Support for the multilingual embedding model BAAI/bge-m3: https://huggingface.co/BAAI/bge-m3

Motivation

There are some differences between this multilingual embedding model and the standard BERT models.

Possible Implementation

Sorry, no idea. I tried; the model architecture seems to be the same as BERT, but the tokenizer is XLMRobertaTokenizer, not BertTokenizer.
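For reference, a minimal sketch (not from this thread, assuming the Hugging Face `transformers` library is installed) that shows the tokenizer mismatch described above:

```python
# Minimal sketch: confirm which tokenizer class the model ships with,
# using the Hugging Face transformers library (pip install transformers).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BAAI/bge-m3")

# Should print "XLMRobertaTokenizerFast" (or "XLMRobertaTokenizer"),
# i.e. a SentencePiece-based tokenizer rather than BERT's WordPiece tokenizer.
print(type(tok).__name__)
```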

RoggeOhta commented 5 months ago

Also requesting support for this model.

vonjackustc commented 4 months ago

Tried to support it using BertModel and an SPM tokenizer: https://huggingface.co/vonjack/bge-m3-gguf

Tested cosine similarity between "中国" ("China") and "中华人民共和国" ("People's Republic of China"):

- bge-m3-f16: 0.9993230772798457
- mxbai-embed-large-v1-f16: 0.7287733321223814
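A rough sketch of how such a comparison could be reproduced, assuming the llama-cpp-python bindings and the GGUF file name used here (`bge-m3-f16.gguf`); the exact `embed()` return shape can differ between versions:

```python
# Hypothetical reproduction of the cosine-similarity check above,
# assuming llama-cpp-python and numpy (pip install llama-cpp-python numpy).
import numpy as np
from llama_cpp import Llama

llm = Llama(model_path="bge-m3-f16.gguf", embedding=True)  # file name assumed

def embed(text: str) -> np.ndarray:
    vec = llm.embed(text)
    # Some versions return a list of vectors for a single input; unwrap if so.
    if vec and isinstance(vec[0], list):
        vec = vec[0]
    return np.asarray(vec, dtype=np.float32)

a = embed("中国")            # "China"
b = embed("中华人民共和国")  # "People's Republic of China"

cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cos:.4f}")
```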

vuminhquang commented 4 months ago

I get an error when using it with LangChain: "terminate called after throwing an instance of 'std::out_of_range'"

ciekawy commented 4 months ago

Same here with llama.cpp; the full error:

libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found

ciekawy commented 4 months ago

The _bert version does not crash, but the embeddings do not seem to make any sense...

ciekawy commented 4 months ago

Also tried to follow the instructions at https://github.com/PrithivirajDamodaran/blitz-embed, but after converting to GGUF I get this error:

llama_model_quantize: failed to quantize: key not found in model: bert.context_length
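A minimal sketch for checking which metadata keys a converted GGUF file actually contains, assuming the `gguf` Python package (gguf-py) that ships with the llama.cpp repo; the file path is a placeholder:

```python
# Sketch: list the key-value metadata entries of a converted GGUF file
# (pip install gguf). The quantizer above complains that
# "bert.context_length" is missing, so it should be absent from this list.
from gguf import GGUFReader

reader = GGUFReader("bge-m3.gguf")  # path assumed

for name in reader.fields:
    print(name)
```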

ciekawy commented 4 months ago

@vonjackustc Can you share the parameters you used with llama.cpp?

github-actions[bot] commented 2 months ago

This issue was closed because it has been inactive for 14 days since being marked as stale.

theta-lin commented 2 months ago

@vonjackustc Same issue as @vuminhquang and @ciekawy when running it with Ollama.

It appears that embedding text containing \n (a newline character) results in the following error:

terminate called after throwing an instance of 'std::out_of_range'
  what():  _Map_base::at

This issue is also brought up here: https://huggingface.co/vonjack/bge-m3-gguf/discussions/3.

BTW, as an alternative, I am using Text Embeddings Inference to run BAAI/bge-m3 now.
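For illustration, a rough sketch of querying a local Text Embeddings Inference server for bge-m3 embeddings; it assumes TEI was started with `--model-id BAAI/bge-m3` and listens on port 8080, and that the `/embed` endpoint accepts a JSON payload of input strings (adjust to your deployment):

```python
# Rough sketch: query a locally running Text Embeddings Inference server
# (assumed setup: --model-id BAAI/bge-m3, listening on localhost:8080).
import requests

resp = requests.post(
    "http://localhost:8080/embed",                # endpoint per assumed TEI setup
    json={"inputs": ["中国", "中华人民共和国"]},
)
resp.raise_for_status()

embeddings = resp.json()  # expected: one embedding vector per input string
print(len(embeddings), len(embeddings[0]))
```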

ciekawy commented 2 months ago

For embeddings, I'd say it's usually safe, if not desirable, to remove newlines. This may be less obvious for longer texts, but still... A sketch of that workaround is below.
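A small sketch of that workaround: normalize whitespace before sending text to the embedding model (the commented-out `embed_fn` stands in for whatever embedding call you use):

```python
# Workaround sketch for the newline crash: collapse all whitespace
# (including "\n") to single spaces before embedding.
import re

def normalize_for_embedding(text: str) -> str:
    # Replace runs of whitespace/newlines with a single space and trim the ends.
    return re.sub(r"\s+", " ", text).strip()

text = "first line\nsecond line"
print(normalize_for_embedding(text))  # -> "first line second line"
# embed_fn(normalize_for_embedding(text))  # hypothetical embedding call
```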

Huoxu69 commented 2 months ago

> Tried to support it using BertModel and an SPM tokenizer: https://huggingface.co/vonjack/bge-m3-gguf
>
> Tested cosine similarity between "中国" and "中华人民共和国": bge-m3-f16: 0.9993230772798457, mxbai-embed-large-v1-f16: 0.7287733321223814

May I ask how exactly this is accomplished?