-
I am using a ChatML template like this to format prompts:
```
def format_conversation(examples):
    conversations = examples['conversation']
    texts = []
    for convo in conversations:
        …
```
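For reference, a minimal sketch of ChatML-style formatting (the `role`/`content` field names and the example conversation are assumptions, not taken from the truncated snippet above):

```python
# Minimal ChatML-style formatter. <|im_start|>/<|im_end|> are the standard
# ChatML delimiters; the 'role'/'content' keys are an assumed turn schema.
def to_chatml(conversation):
    parts = []
    for turn in conversation:
        parts.append(f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>\n")
    return "".join(parts)

example = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
]
print(to_chatml(example))
```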
-
### Question
How can I get the image and text embeddings for another task, and what size are these embeddings? Here is what I know:
Here is the vision output shape: torch.Size([1, 576, 4096])
Here i…
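A shape of `[1, 576, 4096]` reads as batch × patch tokens × hidden size, so each image yields 576 patch embeddings of dimension 4096. One common way to get a single image-level vector is to pool over the token axis; a minimal numpy sketch (mean pooling and the random data are assumptions, different models pool differently):

```python
import numpy as np

# Simulated vision-tower output: batch=1, 576 patch tokens, hidden size 4096.
vision_out = np.random.randn(1, 576, 4096).astype(np.float32)

# Mean-pool over the token axis to obtain one 4096-d embedding per image.
image_embedding = vision_out.mean(axis=1)
print(image_embedding.shape)  # (1, 4096)
```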
-
### 🥰 Feature request
Please add support for text-embeddings-inference embedding and rerank:
https://github.com/huggingface/text-embeddings-inference
### 🧐 Proposed solution
Add configuration options to integrate embedding and rerank models.
### 📝 Additional information
_No response_
-
My dataset produces a 10GB index using Qdrant in LangChain. I am creating both dense and sparse vectors, and I am seeing slow performance both when creating the vectorstore (it took almost 6 d…
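Large indexing jobs are usually much faster when points are upserted in fixed-size batches rather than one at a time. A generic pure-Python batching sketch (the batch size of 256 is an assumption; the actual upsert call would go inside the loop and is not tied to any specific Qdrant API here):

```python
# Generic batching helper: yields fixed-size chunks so vectors can be
# upserted in batches instead of one by one. Batch size 256 is an assumption.
def batched(items, batch_size=256):
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

points = list(range(1000))
batches = list(batched(points))
print(len(batches), len(batches[0]), len(batches[-1]))  # 4 256 232
```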
-
After downloading the model as described in SDXL.md, running sdxl.py truncated the input text at 77 tokens. After changing "max_position_embeddings" in stable-diffusion/text_encoder/config.json to 248, I get the following error:
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
size …
-
ghcr.io/huggingface/text-embeddings-inference:89
-
ghcr.io/huggingface/text-embeddings-inference:89-1.5
-
Hi! Thank you for your outstanding work!
I have been working on improving the LangBridge approach, and I noticed your paper referenced it. As you discussed, LangBridge uses soft prompts generated b…
-
Just realized I get the warning below with Salesforce/blip-image-captioning-large; I think I already ran results for it, but they're probably random in that case; maybe someone could check the result…
-
Using the example code:
```
import argparse
import json
import random

import faiss
import numpy as np
from tqdm import tqdm
from FlagEmbedding import FlagAutoModel, FlagModel

if …
```
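Since the snippet embeds with FlagEmbedding and searches with faiss, here is a minimal dense-retrieval sketch of the same idea (numpy brute-force inner-product search standing in for `faiss.IndexFlatIP`; the dimensions and random data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.standard_normal((100, 64)).astype(np.float32)  # 100 docs, dim 64
query = rng.standard_normal((1, 64)).astype(np.float32)

# Normalize so inner product equals cosine similarity.
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)
query /= np.linalg.norm(query, axis=1, keepdims=True)

scores = query @ corpus.T            # (1, 100) similarity scores
top_k = np.argsort(-scores[0])[:5]   # indices of the 5 most similar docs
print(top_k.shape)  # (5,)
```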