-
### Link to the documentation pages (if available)
https://github.com/patrickjohncyh/fashion-clip
https://huggingface.co/patrickjohncyh/fashion-clip
### How could the documentation be improved?…
-
Currently for mask generation, a new SAM embedding is produced for each input image. But the same embedding can be reused over multiple mask generation steps. So we should store the SAM embeddings onc…
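The caching idea described above can be sketched as follows. This is a minimal illustration of the pattern, not Autodistill's actual code: `sam_encode` is a hypothetical stand-in for SAM's (expensive) image encoder, and the cache is keyed by a made-up image id.

```python
import numpy as np

# Hypothetical stand-in for the expensive SAM image encoder; the real
# encoder would be a forward pass through SAM's ViT backbone.
encode_calls = 0

def sam_encode(image: np.ndarray) -> np.ndarray:
    global encode_calls
    encode_calls += 1
    return image.mean(axis=(0, 1))  # placeholder "embedding"

# Cache keyed by an image id, so repeated mask-generation steps on the
# same image reuse one embedding instead of re-encoding every time.
_embedding_cache: dict[int, np.ndarray] = {}

def get_embedding(image_id: int, image: np.ndarray) -> np.ndarray:
    if image_id not in _embedding_cache:
        _embedding_cache[image_id] = sam_encode(image)
    return _embedding_cache[image_id]

image = np.zeros((8, 8, 3))
for _ in range(5):          # five mask-generation steps on the same image
    emb = get_embedding(0, image)
print(encode_calls)         # the encoder ran only once
```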
-
## Steps
1. Drag Dataset Selection images slider to 2
2. Click Embeddings Compute button
3. Python Error: `ValueError: n_components=3 must be between 0 and min(n_samples, n_features)=2 with svd_so…
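The error above is scikit-learn's PCA refusing to extract more components than `min(n_samples, n_features)`: with only 2 images selected there are only 2 samples, so a 3-D projection is impossible. A minimal reproduction and a possible guard (the 512-dim embeddings are an assumed shape for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# Two 512-dim embeddings (the two images selected by the slider), but a
# 3-D projection is requested: n_components must be <= min(n_samples, n_features).
embeddings = np.random.rand(2, 512)

try:
    PCA(n_components=3, svd_solver="full").fit(embeddings)
except ValueError as err:
    message = str(err)
    print(message)

# Clamping n_components avoids the crash when few images are selected:
n_components = min(3, *embeddings.shape)
reduced = PCA(n_components=n_components, svd_solver="full").fit_transform(embeddings)
print(reduced.shape)
```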
-
## ❓ Questions and Help
We are using PyTorch XLA with TPUs to train multi-modal language models.
We can make most of the code, such as image encoding and the forward pass in the LLM backbone, in a …
-
According to https://docs.google.com/presentation/d/1AY3QV1N_hoi9aXI1r8QTqrNmDK9LyorgJDQMPWb8hBo/edit#slide=id.g2e696416940_0_144, we have to add the temporal/frame encoding to IMAGE-based modality em…
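One common way to realize a temporal/frame encoding is a sinusoidal position encoding added element-wise to each frame's image embedding. The sketch below assumes hypothetical shapes (8 frames, 16-dim embeddings); the actual encoding in the slides linked above may differ (e.g. a learned embedding table):

```python
import numpy as np

def frame_encoding(num_frames: int, dim: int) -> np.ndarray:
    """Sinusoidal temporal encoding, one vector per frame (transformer-style)."""
    positions = np.arange(num_frames)[:, None]                     # (T, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)  # (dim/2,)
    enc = np.zeros((num_frames, dim))
    enc[:, 0::2] = np.sin(positions * freqs)
    enc[:, 1::2] = np.cos(positions * freqs)
    return enc

# Hypothetical image-modality embeddings for an 8-frame clip, 16-dim each.
T, D = 8, 16
image_embeddings = np.random.rand(T, D)

# Add the temporal encoding so the model can distinguish frame order.
temporal = frame_encoding(T, D)
video_embeddings = image_embeddings + temporal
print(video_embeddings.shape)
```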
-
What is the multi-box prompt strategy?
Does it directly combine the meanings of multiple prompts?
From what I can tell, it might be directly calculating the combined meaning of multiple prompts.
In the section "Generic Vis…
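One common reading of "combining the meaning of multiple prompts" is CLIP-style prompt ensembling: encode each prompt, L2-normalize, average, and renormalize. Whether the paper in question does exactly this is not confirmed here; the sketch below just illustrates that interpretation, with random vectors standing in for text-encoder outputs:

```python
import numpy as np

def ensemble_prompts(prompt_embeddings: np.ndarray) -> np.ndarray:
    """Average L2-normalized prompt embeddings, then renormalize (CLIP-style)."""
    normed = prompt_embeddings / np.linalg.norm(prompt_embeddings, axis=1, keepdims=True)
    mean = normed.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Stand-ins for text-encoder outputs of several prompts for one concept.
rng = np.random.default_rng(0)
prompts = rng.normal(size=(4, 512))

combined = ensemble_prompts(prompts)
print(combined.shape)  # a single unit-norm vector
```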
-
I think this feature would make a lot of sense.
I can add an `"images"` field in requests to `/api/generate` when using a multi-modal model, so why can't I do the same for requests to `/api/embeddings`?…
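For reference, the requested payload might look like the following. Note that an `"images"` field on `/api/embeddings` is the feature being proposed here, not an existing API; the model name and base64 placeholder are illustrative only.

```json
{
  "model": "llava",
  "prompt": "Describe the image",
  "images": ["<base64-encoded image>"]
}
```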
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
### System Info
Image: v1.2 CPU
Model used: jinaai/jina-embeddings-v2-base-de
Deployment: Docker / RH OpenShift
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officiall…
-
### System Info
Sample Docker Compose File
```
embedding:
  image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.0
  platform: linux/amd64
  volumes:
    - embed_data:/data
…