-
### System Info
Sample Docker Compose File
```yaml
embedding:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.0
platform: linux/amd64
volumes:
- embed_data:/data
…
```
-
### Bug Details
**Describe the bug**
When I upload documents, they are processed up to the point where they're added to the embeddings-queue. Then they're stuck there:
![image](https://github.com/mi…
-
## Description
We use CLIP for product recommendations in e-commerce. By generating two vectors (image + name) and then adding the concatenated result to the TS embedding field, we get more accurat…
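The two-vector approach described above can be sketched as follows. This is a minimal illustration, not the actual pipeline: it assumes CLIP-style fixed-size vectors and uses random stand-ins for real CLIP outputs; the function name is illustrative.

```python
import numpy as np

def combined_embedding(image_vec: np.ndarray, name_vec: np.ndarray) -> np.ndarray:
    """Concatenate an image embedding with a product-name embedding.

    Both inputs are L2-normalized first so that neither modality
    dominates similarity scores computed over the combined vector.
    """
    img = image_vec / np.linalg.norm(image_vec)
    txt = name_vec / np.linalg.norm(name_vec)
    return np.concatenate([img, txt])

# Random stand-ins for CLIP outputs (512 dims each, hypothetical sizes)
rng = np.random.default_rng(0)
vec = combined_embedding(rng.normal(size=512), rng.normal(size=512))
print(vec.shape)  # (1024,)
```

Normalizing before concatenation keeps the two modalities on the same scale; the combined vector then has norm sqrt(2), which downstream cosine similarity is insensitive to.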
-
Is there a plan to incorporate image embeddings along with OCR and metadata-based retrieval? Utilizing the CLIP model from Candle to generate image embeddings could provide clearer context and improve…
-
Great work, thank you to you and your team!
I have some questions:
- When will the smaller model mentioned in the paper be released, and what are typical use cases for the large vs. small model…
-
### Question
I noticed that `model.generate` can produce the output directly, but how can I get the image embeddings and text embeddings?
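Without knowing this model's API, one generic way to turn encoder outputs into a single embedding is mean pooling over non-padded tokens. The sketch below uses fake hidden states in place of a real encoder pass, and the function name is an assumption for illustration only:

```python
import numpy as np

def pool_embedding(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Mean-pool token-level hidden states into one embedding vector,
    excluding padding positions via the attention mask."""
    mask = attention_mask[..., None].astype(hidden_states.dtype)  # (seq, 1)
    summed = (hidden_states * mask).sum(axis=0)
    counts = mask.sum(axis=0)
    return summed / counts

# Stand-in for encoder output: 6 tokens, hidden size 8, last 2 are padding
hs = np.ones((6, 8))
mask = np.array([1, 1, 1, 1, 0, 0])
emb = pool_embedding(hs, mask)
print(emb.shape)  # (8,)
```

The same pooling can be applied separately to the image-encoder and text-encoder hidden states, if the model exposes them.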
-
Exploration of [model explainability techniques](https://captum.ai/) using the prediction capabilities of the CEFAS model, in complement to using it as a source of embeddings.
E.g. we take the imag…
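One explainability technique in this family that needs only the model's prediction function is occlusion sensitivity: mask patches of the input and measure how much the score drops. The sketch below uses a dummy predictor in place of the CEFAS model; all names are illustrative.

```python
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 4) -> np.ndarray:
    """Occlusion sensitivity: slide a zeroed patch over the image and
    record how much the model's score drops at each patch position."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - predict(masked)
    return heat

# Dummy "model": score is the mean intensity of the top-left quadrant
def predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, predict)
print(heat.shape)              # (4, 4)
print(heat[0, 0] > heat[3, 3])  # top-left patches matter more: True
```

High values in the heat map mark regions the prediction depends on, which complements using the model purely as an embedding source.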
-
I am a student in Toronto learning about multimodal models and multimodal retrieval.
Can embeddings be extracted from your models?
I would like to compare retrieval results from your model to CLIP.…
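Assuming vectors can be extracted from both models, one simple comparison is top-k overlap of cosine-similarity rankings over the same corpus. This sketch uses random vectors as stand-ins for the two models' embeddings:

```python
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(-scores)[:k]

# Rank the same 100 items with embeddings from two hypothetical models
rng = np.random.default_rng(1)
model_a = rng.normal(size=(100, 64))   # e.g. CLIP vectors
model_b = rng.normal(size=(100, 32))   # e.g. the other model's vectors
ranks_a = top_k(model_a[0], model_a, k=10)
ranks_b = top_k(model_b[0], model_b, k=10)
overlap = len(set(ranks_a) & set(ranks_b)) / 10  # rank agreement @10
print(ranks_a[0], ranks_b[0])  # item 0 is most similar to itself: 0 0
```

Higher overlap means the two models rank the corpus similarly for that query; averaging over many queries gives a simple retrieval-agreement score.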
-
### What happened?
Hi
My sample code and output are below. I get the error shown in the output for Cohere embeddings; it works normally with sentence-transformers.
I would appreciate any help.
…
-
### Connector Name
destination-weaviate
### Connector Version
0.2.19
### What step the error happened?
Configuring a new connector
### Relevant information
During mapping of source to destinati…