-
Hi, I'd like to experiment with multimodal language models that can handle both images and text as input. Is there a way to input an image?
My Modelfile looks like this:
```
FROM ./my_model.gguf…
```
-
For GPT-4, image inputs are still in [limited alpha](https://openai.com/research/gpt-4#:~:text=image%20inputs%20are%20still%20in%20limited%20alpha).
For GPT-3.5, it would be great to see LangChain…
-
### Feature Description
I would like to use the latest model from OpenAI: `gpt-4-0125-preview`, published yesterday.
You can see it announced in this [blog post](https://openai.com/blog/new-embeddin…
-
### What happened?
Hi Team,
Love the project! First time contributing.
I found a minor inconsistency in how input is handled across the OpenAI and AWS Embeddings APIs, which results in an incorr…
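Since the report is truncated, the exact inconsistency isn't visible, but one common divergence is that OpenAI's embeddings endpoint accepts either a single string or a list of strings, while single-text backends (e.g. a Bedrock/Titan path) expect exactly one string per call. A shared normalizer is one way to keep the two code paths from drifting; the function name here is illustrative, not from either SDK:

```python
from typing import List, Union


def normalize_embedding_input(texts: Union[str, List[str]]) -> List[str]:
    """Coerce user-facing embedding input into a list of strings.

    A lone string becomes a one-element list, so every backend can
    iterate (or batch) over the same shape regardless of what the
    caller passed in.
    """
    if isinstance(texts, str):
        return [texts]
    if not all(isinstance(t, str) for t in texts):
        raise TypeError("embedding input must be a string or a list of strings")
    return list(texts)
```

With this in place, both the batch-capable and the one-at-a-time backends consume the same normalized list, so a string input can't silently take a different path in one provider than the other.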
-
Hi, I'm wondering if the select function will support some multimodal models later? E.g., BLIP-2 in transformers.
-
Conferences to review:
- [x] https://dl.acm.org/action/doSearch?target=browse-proceedings-specific&ConceptID=118222&ConceptID=120670
- [x] https://dl.acm.org/doi/proceedings/10.1145/3576050
- [ ] h…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I am using the following code to create a MultiModalVectorStoreIndex only from images:
…
-
(discussion started in #410 and email)
----
I started looking at the chat vs. non-chat interfaces, beginning with OpenAI. These days, all OpenAI API calls are supposed to go through the chat end…
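One way to bridge the two interfaces is a small adapter that wraps a bare completion-style prompt into the chat-message format the chat endpoint expects. This is a minimal sketch; the function name is illustrative and not from any particular library discussed above:

```python
from typing import Dict, List, Optional


def as_chat_messages(prompt: str, system: Optional[str] = None) -> List[Dict[str, str]]:
    """Wrap a completion-style prompt as a chat message list.

    An optional system message can carry instructions that a legacy
    completion prompt would have inlined before the user text.
    """
    messages: List[Dict[str, str]] = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return messages
```

The resulting list can be passed as the `messages` argument of a chat-completions call, letting non-chat callers target the chat endpoint without changing their own prompt-building code.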
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I would like to know, if I have a well formatted markdown document, with pictures insert…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I was trying out Image to Image Retrieval; can anyone suggest how I can use a local LLM in…
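Whatever local vision encoder ends up producing the embeddings (CLIP run on your own machine, for example), the retrieval step itself is model-agnostic. A hypothetical sketch of that step, ranking stored image embeddings by cosine similarity to a query image's embedding, with pure NumPy and no model involved:

```python
import numpy as np


def top_k_similar(query: np.ndarray, index: np.ndarray, k: int = 3) -> list:
    """Return row indices of `index` most similar to `query`.

    `query` is a single embedding vector; `index` is a matrix with one
    embedding per row. Both are L2-normalized so the dot product equals
    cosine similarity, then the top-k rows are returned best-first.
    """
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q
    return list(np.argsort(-scores)[:k])
```

The query embedding and the index rows just need to come from the same encoder; the ranking itself runs entirely locally.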