-
### Anything you want to discuss about vllm.
Can the current benchmark_serving.py be used with a multimodal LLM (LLaVA) and image input? The existing code sends the request in the following format in ba…
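For context, a multimodal request to an OpenAI-compatible endpoint (which vLLM exposes) differs from the plain text format: the user message's `content` becomes a list mixing `text` and `image_url` parts. A minimal sketch of building such a payload — the model name and image bytes here are illustrative, not from the original question:

```python
import base64
import json

def build_image_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat-completions payload with one inline image.

    The image is base64-encoded into a data URL, one of the forms accepted
    by OpenAI-compatible /v1/chat/completions servers such as vLLM's.
    """
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # Multimodal content is a list of typed parts, not a plain string.
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 128,
    }

# Illustrative usage (model name and bytes are placeholders):
payload = build_image_request(
    "llava-hf/llava-1.5-7b-hf", "What is in this image?", b"\xff\xd8placeholder"
)
print(json.dumps(payload)[:40])
```

A benchmark script would need to emit this nested `content` list instead of a flat prompt string for each request.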
-
Automating GUI-based Test Oracles for Mobile Apps (MSR'24)
A Study of Using Multimodal LLMs for Non-Crash Functional Bug Detection in Android Apps (https://arxiv.org/pdf/2407.19053)
AUITestAgent: …
-
### Your current environment
I use vLLM to serve the model like this:
`vllm serve OpenGVLab/InternVL2-8B --max-model-len 4096 --trust-remote-code --limit-mm-per-prompt image=2`
but it raises an error:
Process Spa…
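For reference, `--limit-mm-per-prompt image=2` caps how many images a single prompt may carry, so a client request to this server would look like the following sketch (model name matches the command above; the URLs and prompt are illustrative):

```python
def build_multi_image_request(model: str, prompt: str, image_urls: list) -> dict:
    """Build a chat payload with several images in one user turn.

    The number of entries in image_urls must not exceed the server's
    --limit-mm-per-prompt image=N setting, or the request is rejected.
    """
    content = [{"type": "text", "text": prompt}]
    content += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    return {"model": model, "messages": [{"role": "user", "content": content}]}

# Two images, matching the image=2 limit from the serve command:
req = build_multi_image_request(
    "OpenGVLab/InternVL2-8B",
    "Compare these two images.",
    ["https://example.com/a.jpg", "https://example.com/b.jpg"],
)
n_images = sum(1 for c in req["messages"][0]["content"] if c["type"] == "image_url")
assert n_images == 2
```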
-
Hi! I've run into some problems while training Octavius: the token_acc remains 0.0. I wonder whether the pretrained LLM is simply Vicuna-13B (which is a pure language model) or a multimodally aligned model…
-
![img_v3_02ct_364c6b00-4aa3-4f66-9d9b-ac908e08ba6g](https://github.com/user-attachments/assets/e91bc4d4-69b5-4c82-bf84-44ea0af912ae)
ipex-llm: 2.1.0b20240714
transformers: 4.41.2
Driver: 32.0.101…
-
*Note*: If you have a model or program that is not supported yet but should be, please use the program coverage template.
## 🐛 Bug
### To Reproduce
I was trying to run NeVA by following […
-
### Describe the bug
I am having issues continuing a conversation with a multimodal agent after a function call has been used.
The goal that I have is to save a report that a first Mul…
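In the OpenAI-style chat format that most agent frameworks follow, a conversation can only continue past a function call if the tool's result is appended as a `tool` message referencing the same `tool_call_id`. A minimal sketch of a well-formed history — the tool name, arguments, and id are invented for illustration:

```python
import json

messages = [
    {"role": "user", "content": "Generate the report and then save it."},
    # The model's turn that requested the function call:
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",  # hypothetical id, for illustration only
                "type": "function",
                "function": {
                    "name": "save_report",
                    "arguments": json.dumps({"path": "report.md"}),
                },
            }
        ],
    },
    # The tool's output must be linked back via the same tool_call_id:
    {"role": "tool", "tool_call_id": "call_1", "content": "saved"},
    # Only now can a normal user turn continue the conversation:
    {"role": "user", "content": "Great, now summarize the report."},
]

assert messages[2]["tool_call_id"] == messages[1]["tool_calls"][0]["id"]
```

A common cause of "cannot continue after a function call" errors is a missing or mismatched `tool` message between the assistant's tool call and the next turn.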
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
-
### Model description
https://github.com/ModelTC/lightllm/pull/266
Will there be vision LLM support in Lorax soon?
### Open source status
- [X] The model implementation is available
- [X] The mo…
-
## ❓ General Questions
Hi all,
I've recently been trying to port Microsoft's Florence-2-large model to mlc. It seems to run initially, but I have a problem: multimodal LLM models usually have …