-
**Details of model being requested**
- Model name: bge-large-en-v1.5
- Source repo link: https://huggingface.co/BAAI/bge-large-en-v1.5
- Model use case: Vector embedding
-
### System Info
CUDA: 12.4
Installed via pip
### Running Xinference with Docker?
- [ ] docker
- [X] pip install
- [ ] installation …
-
### What is the issue?
ollama pull quentinz/bge-large-zh-v1.5
When starting quentinz/bge-large-zh-v1.5:latest, it raises an error:
ollama run quentinz/bge-large-zh-v1.5:latest
Error: "quentinz/bge-large…
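For reference, embedding-only models are normally queried through ollama's embeddings HTTP API rather than started interactively with `ollama run`. A minimal sketch of that call, assuming the default local endpoint and that the pull above succeeded:

```python
# Hypothetical check: query the local ollama embeddings API directly,
# since pure embedding models have no chat/generate mode for `ollama run`.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",           # default ollama endpoint
    json={
        "model": "quentinz/bge-large-zh-v1.5:latest",   # the pulled model tag
        "prompt": "测试句子",                             # text to embed
    },
)
resp.raise_for_status()
print(len(resp.json()["embedding"]))  # expected: 1024 dims for bge-large-zh-v1.5
```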
-
Requesting support for BAAI/bge-m3. Thanks.
-
https://huggingface.co/BAAI/bge-m3
The bge-m3 model has 3 retrieval modes: dense, sparse, and multi_vector.
However, sentence-transformers only supports the dense one, so I think it would be great if mteb makes it…
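Until such support lands, all three representations can be obtained from the FlagEmbedding package directly, following the usage documented on the model card; a minimal sketch:

```python
# Minimal sketch of getting all three bge-m3 outputs via FlagEmbedding,
# following the usage shown on the BAAI/bge-m3 model card.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)  # fp16 is optional, for GPU speed

out = model.encode(
    ["What is BGE M3?"],
    return_dense=True,         # dense: one vector per text
    return_sparse=True,        # sparse: lexical token -> weight mapping
    return_colbert_vecs=True,  # multi-vector: one vector per token (ColBERT-style)
)

print(out["dense_vecs"].shape)       # (1, 1024)
print(out["lexical_weights"][0])     # {token_id: weight, ...}
print(out["colbert_vecs"][0].shape)  # (num_tokens, 1024)
```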
-
Hi,
I want to run the code, but the BM25 search returns None. I found this code:
```python
def wiki_search(query, size=10):
    pass
```
Does this mean the function is not implemented? Where can I find it?
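In case a temporary stand-in helps while the real implementation is located, below is a purely hypothetical sketch of `wiki_search` built on the `rank_bm25` package over an in-memory passage list; the repository's actual version may be backed by Elasticsearch or a prebuilt index, so this only illustrates the expected interface.

```python
# Hypothetical stand-in for wiki_search using the rank_bm25 package.
# Assumes `wiki_passages` is a list of passage strings loaded elsewhere;
# the real implementation in the repo may differ entirely.
from rank_bm25 import BM25Okapi

wiki_passages = [
    "BGE-M3 is a multilingual embedding model supporting dense, sparse and multi-vector retrieval.",
    "BM25 is a classical lexical ranking function based on term frequency and document length.",
]
tokenized_corpus = [p.lower().split() for p in wiki_passages]
bm25 = BM25Okapi(tokenized_corpus)

def wiki_search(query, size=10):
    """Return the top-`size` passages ranked by BM25 score."""
    return bm25.get_top_n(query.lower().split(), wiki_passages, n=size)

print(wiki_search("what is bm25", size=1))
```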
-
Hello. Thank you for measuring and maintaining the embedding model rankings. I see that KoE5 and bge-m3-korean have been added.
Recently an embedding model called bge-m3-ko was also released and is drawing a lot of attention, so I would like to see how it compares.
Here is the link:
https://huggingface.co/dragonkue/BGE-m3-ko
Also, currently …
-
Hello, authors! Thank you very much for open-sourcing such a great model. I am currently trying to reproduce the fine-tuning of bge-m3; the resources I have are 8 V100 (32 GB) GPUs. The exact command is:
```bash
torchrun --nproc_per_node 8 \
-m FlagEmbedding.BGE_M3.run \
--output_dir ../../output/finetune/firstModel \
--model_name_o…
```
-
Hello, I am currently fine-tuning BGE-M3 embeddings and have found two ways to construct the training data. One is the [LlamaIndex fine-tuning](https://www.llamaindex.ai/blog/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) format:
```json
{
"queries": {
"68eb2…
-
I put the downloaded model into the /home/project/CRUD_RAG/sentence-transformers/bge-base-zh-v1.5/ directory and got the following error. Where should the model be placed?
![image](https://github.com/user-attachments/assets/559a4cc5-8d6f-4526-9372-70ccef7268f4)
![image](http…
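Without the full traceback it is hard to be sure, but if the project loads the embedder through sentence-transformers, a quick sanity check is to load the local copy by its absolute path (taken from the message above); a minimal sketch:

```python
# Quick sanity check: load the local bge-base-zh-v1.5 copy directly by path.
# If this fails, the downloaded files are incomplete; if it works, the issue
# is more likely the path configured in the project than the model itself.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("/home/project/CRUD_RAG/sentence-transformers/bge-base-zh-v1.5")
emb = model.encode(["测试句子"], normalize_embeddings=True)
print(emb.shape)  # expected: (1, 768) for bge-base-zh-v1.5
```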