-
### 🚀 The feature, motivation and pitch
Consider a scenario where a large model is deployed in the cloud, and the application is deployed on a computationally limited embedded device.
If we want t…
-
### Feature request
Is it possible to run multimodal LLMs like Qwen-VL or LLaVA 1.5 using OpenLLM?
### Motivation
_No response_
### Other
_No response_
-
Do you have any plans to support multimodal LLMs, such as MiniGPT-4/MiniGPT v2 (https://github.com/Vision-CAIR/MiniGPT-4/) and LLaVA (https://github.com/haotian-liu/LLaVA/)? That would be a significan…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I am referring to this example: https://www.llamaindex.ai/blog/multimodal-rag-for-advanc…
-
Request to add support for multimodal LLMs in unsloth
Revising my previous issue: https://github.com/unslothai/unsloth/issues/376
-
Is it possible to merge multimodal LLMs?
For example, could LLaVA and CodeLlama be merged? It might be beneficial for some software engineering tasks.
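"Merging" here usually means combining the weights of two checkpoints, most simply by linear interpolation of parameters that share the same name and shape. A minimal, library-free sketch of that idea (the state dicts here use plain Python lists as stand-in tensors; real merging would operate on framework tensors, and only layers with matching shapes, e.g. a shared Llama-family language backbone, could be combined):

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate matching parameters: alpha * a + (1 - alpha) * b.

    Assumes both state dicts have identical keys and identically shaped
    values; parameters unique to one model (e.g. vision-specific modules)
    have no counterpart and cannot be merged this way.
    """
    return {
        key: [alpha * x + (1 - alpha) * y for x, y in zip(sd_a[key], sd_b[key])]
        for key in sd_a
    }

# Toy example: two "models" with a single weight vector each.
merged = merge_state_dicts({"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}, alpha=0.5)
```

Whether the merged model is actually useful is an empirical question; naive weight averaging often degrades both capabilities unless the models were fine-tuned from a common base.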
-
I want it to work on my existing project, which has multiple code files in nested folders, with multimodality and local models via Ollama and LiteLLM.
-
### Describe your use-case.
There are multiple simple models used in this repository: BLIP, CLIP, and WD taggers. However, when it comes to detailed description, they are all dwarfed by modern multi…
-
Paper : [https://arxiv.org/pdf/2406.16860](https://arxiv.org/pdf/2406.16860)
Website : [https://cambrian-mllm.github.io](https://cambrian-mllm.github.io)
Code : [https://github.com/cambrian-mllm/cam…
-
version: TensorRT-LLM 0.10.0
The official script (TensorRT-LLM/examples/multimodal/run.py) repeats the same prompt to form a batch, but if I use different prompts to form a batch, the result is incorre…
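A common cause of incorrect results when batching different prompts is that variable-length sequences must be padded to a common length and masked so that padded positions are ignored; repeating one prompt hides this because every row already has the same length. A minimal, library-free sketch of that preprocessing step (the token IDs and pad ID below are made up for illustration, not taken from TensorRT-LLM):

```python
PAD_ID = 0  # hypothetical padding token ID

def pad_batch(sequences, pad_id=PAD_ID):
    """Left-pad variable-length token-ID sequences to a common length and
    build an attention mask (1 = real token, 0 = padding)."""
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        n_pad = max_len - len(seq)
        input_ids.append([pad_id] * n_pad + list(seq))
        attention_mask.append([0] * n_pad + [1] * len(seq))
    return input_ids, attention_mask

# Three prompts of different lengths become one rectangular batch.
ids, mask = pad_batch([[5, 7], [3, 9, 2, 8], [4]])
```

If the runtime is fed a rectangular batch without the corresponding mask (or per-sequence lengths), the padded positions are treated as real tokens and the outputs for the shorter prompts are corrupted.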