-
### Feature request
Image embeddings and audio embeddings currently have insufficient coverage in the OpenAI API server.
### Motivation
It would be great to have a test covering this end-to-end…
-
Whereas /v1/chat/completions succeeds, /v1/embeddings returns a 404 for a similar request body.
I was hoping to get the output embedding vector for an image using openbmb/MiniCPM-V-2…
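As a sketch, the kind of request body involved might look like the following. The payload shape, field names, and model id are assumptions for illustration, not a confirmed API contract:

```python
import base64
import json

# Hypothetical /v1/embeddings payload for an OpenAI-compatible server that
# accepts image input; the exact multimodal input format is an assumption.
model_id = "openbmb/MiniCPM-V-2"  # placeholder; substitute the exact model id
image_b64 = base64.b64encode(b"<raw image bytes here>").decode("ascii")

payload = {
    "model": model_id,
    "input": [
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
        }
    ],
}

# A similar content part sent to /v1/chat/completions succeeds,
# while /v1/embeddings answers 404 for this body.
body = json.dumps(payload)
```

Posting `body` to `/v1/chat/completions` (wrapped in a message) works, which is what makes the 404 from `/v1/embeddings` surprising.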
-
### Feature request
Is it on the roadmap to support image embedding models?
### Motivation
This would be very useful, since many VLMs (vision-language models) are coming out.
### Your contribution
Anything that is neede…
-
### Bug Description
While SentenceTransformer supports embedding models such as CLIP, HuggingFaceEmbedding does not.
### Version
0.10.67.post1
### Steps to Reproduce
from…
-
Thank you for your elegant work! I am wondering whether InternV2 has the same functionality as InternVL-C in the previous versions, which supported cross-modal feature retrieval, or how I can get aligned embeddin…
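For context, once image and text embeddings live in a shared (aligned) space, cross-modal retrieval reduces to cosine similarity. A generic sketch follows; the embeddings here are random stand-ins, not actual model outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for aligned embeddings: 4 image vectors and 3 text vectors
# projected into the same 512-dimensional space by the model.
image_embs = rng.normal(size=(4, 512))
text_embs = rng.normal(size=(3, 512))

def normalize(x):
    """L2-normalize along the last axis so dot products become cosines."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity matrix: rows = text queries, cols = candidate images.
sims = normalize(text_embs) @ normalize(image_embs).T

# For each text query, the index of the best-matching image.
best_images = sims.argmax(axis=1)
```

The same matrix transposed gives image-to-text retrieval; whether a given InternVL release exposes such aligned embeddings is exactly the question above.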
-
### Describe the issue
There are a couple of places that link to 'https://docs.trychroma.com/embeddings', but this page is no longer working; for example, it is referenced in 'https://microsoft.gith…
-
There's a Streamlit UI in here (borrowed from the one for [language model embeddings search](https://github.com/NERC-CEH/embeddings_app/))
* Uses a collection of image embeddings from this [fork of …
-
For SAM1 I could do:

```python
if exists:
    logging.info(f"Embeddings already exist. Loading from: {embeddings_path}")
    model.load_image_embedding(embeddings_path)
e…
```
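For what it's worth, that load-or-compute pattern can be written model-agnostically; the function and argument names here are hypothetical, and the compute step stands in for running the image encoder:

```python
import logging
import os

import numpy as np

def load_or_compute_embeddings(embeddings_path, compute_fn):
    """Load cached image embeddings if the file exists, else compute and save."""
    if os.path.exists(embeddings_path):
        logging.info(f"Embeddings already exist. Loading from: {embeddings_path}")
        return np.load(embeddings_path)
    embeddings = compute_fn()  # e.g. run the image encoder once
    np.save(embeddings_path, embeddings)
    return embeddings
```

The first call computes and caches the embeddings; subsequent calls just load them from disk.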
-
### Question
How can I get the image and text embeddings for another task, and what size are these embeddings? Here is what I know:
Here is the vision output shape: torch.Size([1, 576, 4096])
Here i…
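One common way to turn a `[1, 576, 4096]` sequence of vision tokens into a single per-image embedding is mean pooling over the token axis. This is a generic technique, not necessarily what this particular model's projector does; the array below is a random stand-in for the vision tower output:

```python
import numpy as np

# Stand-in for the vision output: batch=1, 576 patch tokens, 4096 dims each.
vision_out = np.random.default_rng(0).normal(size=(1, 576, 4096))

# Mean-pool over the token axis to get one 4096-d vector per image.
image_embedding = vision_out.mean(axis=1)
```

The result has shape `(1, 4096)`, i.e. one 4096-dimensional embedding per image; CLS-token selection or attention pooling are common alternatives.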
-
This is a great way to check the effectiveness of our SAE ablations.
See Gytis's post:
https://www.lesswrong.com/posts/Quqekpvx8BGMMcaem/interpreting-and-steering-features-in-images
Use Kandinsk…