-
I have a dataset in this format:
# Assuming data_set is a list of dictionaries
ragas_data = [
    {
        "question": entry["text_vector_1"],
        "answer": entry["text_vector_2"],
…
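The snippet above is cut off, but the mapping it starts can be sketched end to end. The field names `text_vector_1`/`text_vector_2` are taken from the snippet; the toy entries below are placeholders, not real data:

```python
# Toy stand-in for the original data_set (a list of dictionaries).
data_set = [
    {"text_vector_1": "What is X?", "text_vector_2": "X is Y."},
    {"text_vector_1": "What is Z?", "text_vector_2": "Z is W."},
]

# Map each entry into the question/answer structure shown above.
ragas_data = [
    {
        "question": entry["text_vector_1"],
        "answer": entry["text_vector_2"],
    }
    for entry in data_set
]
```

From here, `datasets.Dataset.from_list(ragas_data)` produces the `Dataset` object that `ragas.evaluate` accepts (assuming the remaining required columns are added the same way).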
-
On the dev branch, I initialize the pipeline with the following code, but the output image is covered with a red layer:
`# brushnet-based version
unet = UNet2DConditionModel.from_pretrained(
"stable-diffusion-…
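The init code above is truncated, so this is only a guess at the symptom, not a confirmed diagnosis: a uniformly red-tinted output is often a channel-order mixup (BGR vs. RGB) somewhere between OpenCV-style arrays and PIL images. A minimal numpy sketch of the check and fix:

```python
import numpy as np

# Hypothetical 2x2 image stored in BGR order: if downstream code reads it
# as RGB, what should be blue shows up as red.
img_bgr = np.zeros((2, 2, 3), dtype=np.uint8)
img_bgr[..., 2] = 255  # channel 2 holds the "red-looking" values

# Reversing the channel axis converts BGR -> RGB (and back again).
img_rgb = img_bgr[..., ::-1]
```

If the red cast disappears after a channel flip like this, the bug is in how the pipeline's output array is converted to an image, not in the model itself.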
-
jina-embeddings-v3 is a multilingual multi-task text embedding model designed for a variety of NLP applications. Based on the [Jina-XLM-RoBERTa architecture](https://huggingface.co/jinaai/xlm-roberta-…
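Per its model card, jina-embeddings-v3 supports Matryoshka-style embeddings, meaning vectors can be truncated to a smaller dimension and re-normalized with limited quality loss. The helper below is a generic sketch of that post-processing with toy values, not the library's own API:

```python
import numpy as np

def truncate_embedding(vec, dim):
    """Truncate a Matryoshka-style embedding to `dim` dimensions and
    L2-normalize, so cosine similarities stay well-scaled."""
    v = np.asarray(vec, dtype=np.float32)[:dim]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Toy stand-in for a model output (real jina-embeddings-v3 vectors are larger).
full = np.array([3.0, 4.0, 0.0, 0.0])
short = truncate_embedding(full, 2)
```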
-
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
I created a subclass of `BaseRagasEmbeddings` and I want…
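In ragas, custom embeddings subclass `BaseRagasEmbeddings` and implement the `embed_query`/`embed_documents` pair (the interface follows LangChain's `Embeddings`). The standalone sketch below mirrors that shape with a local stand-in base class and a deterministic dummy backend so it runs without ragas installed; swap in the real base class and your model for actual use:

```python
from abc import ABC, abstractmethod

class Embeddings(ABC):
    """Stand-in for the real base class (ragas.embeddings.BaseRagasEmbeddings)."""
    @abstractmethod
    def embed_query(self, text: str) -> list[float]: ...
    @abstractmethod
    def embed_documents(self, texts: list[str]) -> list[list[float]]: ...

class MyEmbeddings(Embeddings):
    """Dummy backend: length-based 2-d vectors, for illustration only."""
    def embed_query(self, text: str) -> list[float]:
        return [float(len(text)), 1.0]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [self.embed_query(t) for t in texts]

emb = MyEmbeddings()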
-
background-1 | File "/usr/local/lib/python3.11/site-packages/llama_index/core/indices/utils.py", line 138, in embed_nodes
background-1 | new_embeddings = embed_model.get_text_embedding_batch(…
-
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
I wrote this code and I get the error:
The api_key …
-
I have been observing low cosine similarity scores for InternVideo2 video embeddings compared to relevant text caption embeddings. In some cases, the scores are even negative. I am not sure if I am mi…
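Whether a low or negative score indicates a real mismatch is easier to judge once the metric itself is pinned down; a plain cosine similarity on toy vectors (not InternVideo2 outputs) looks like this:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity of two vectors; result lies in [-1, 1]."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

video_emb = [1.0, 0.0]   # toy placeholder for a video embedding
text_emb = [0.6, 0.8]    # toy placeholder for a caption embedding
score = cosine_sim(video_emb, text_emb)
```

Negative values are geometrically legitimate (vectors more than 90° apart), but if relevant pairs consistently score near zero it is worth checking that both embeddings come from the same aligned projection head.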
-
Thank you for your elegant work! I am wondering whether InternVideo2 has the same functionality as InternVL-C in the previous versions, which supported cross-modal feature retrieval, or how I can get aligned embeddin…
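Whichever head the model exposes, cross-modal retrieval itself reduces to ranking candidates by similarity between aligned, normalized embeddings. A generic numpy sketch with toy vectors (not actual model outputs):

```python
import numpy as np

# Toy aligned embeddings: 2 captions (already unit-norm) and 2 clips.
text_embs = np.array([[1.0, 0.0], [0.0, 1.0]])
video_embs = np.array([[0.9, 0.1], [0.2, 0.8]])

# L2-normalize the clips, then score every caption against every clip.
video_embs = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
scores = text_embs @ video_embs.T          # shape: (captions, clips)
best_clip_per_caption = scores.argmax(axis=1)
```

If embeddings from the two modalities were produced by unaligned heads, this ranking is meaningless, which is why getting the aligned features matters.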
-
Hi, I have a similar problem to https://github.com/microsoft/CLAP/issues/24, but I'm using audio shorter than 6 seconds.
MWE:
```python
from msclap import CLAP
import torch
import subprocess
…
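# (The MWE above is truncated. If the model expects a fixed minimum clip
# duration, one common workaround is zero-padding short waveforms before
# embedding them. The 7-second target below is an assumption for
# illustration, not a documented msclap value.)
import numpy as np

def pad_audio(wave: np.ndarray, sample_rate: int, min_seconds: float) -> np.ndarray:
    """Zero-pad a mono waveform so it lasts at least `min_seconds`."""
    target = int(sample_rate * min_seconds)
    if wave.shape[0] >= target:
        return wave
    return np.pad(wave, (0, target - wave.shape[0]))

short = np.ones(16000, dtype=np.float32)   # 1 s of audio at 16 kHz
padded = pad_audio(short, 16000, 7.0)      # padded out to 7 s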
-
https://huggingface.co/collections/cl-nagoya/ruri-japanese-general-text-embeddings-66cf1f3ee0c8028b89d85b5e