-
[Open issues - help wanted!](https://github.com/vllm-project/vllm/issues/4194#issuecomment-2102487467)
**Update [11/18] - In the upcoming months, we will focus on performance optimization for mul…
-
How do you train the model? I tried, but it could not converge. Thank you very much if you can share the concrete details.
-
## Motivation
### Background
To provide more control over the model inputs, we currently define two methods for multi-modal models in vLLM:
- The **input processor** is called inside `LLMEngi…
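For readers unfamiliar with the mechanism, here is a minimal sketch of registering such an input processor, based on vLLM's multi-modal model docs from around this period; exact import paths varied across versions, and the processor body and model class below are hypothetical stand-ins:
```python
from vllm.inputs import INPUT_REGISTRY, InputContext, LLMInputs

# Hypothetical input processor: called inside LLMEngine before the model
# runs, so it can rewrite prompt token IDs (e.g. expand image placeholders).
def input_processor_for_my_model(ctx: InputContext, llm_inputs: LLMInputs) -> LLMInputs:
    # ... adjust llm_inputs["prompt_token_ids"] as needed ...
    return llm_inputs

@INPUT_REGISTRY.register_input_processor(input_processor_for_my_model)
class MyMultiModalModel:  # stand-in for the real nn.Module subclass
    ...
```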
-
An error occurred while running demo.ipynb in InternVideo2's multi_modality demo.
I installed the packages according to requirements.txt.
```
ModuleNotFoundError
Traceback (most …
```
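The excerpt does not name the missing module, so any fix is a guess; one common cause with repo demo notebooks is that the notebook cannot see the repository's own packages. A minimal, hypothetical first check (the `multi_modality` path is an assumption based on the demo's location):
```python
import sys
from pathlib import Path

# Hypothetical fix: make the repo's own modules importable from the notebook.
# Adjust the path to wherever InternVideo/InternVideo2/multi_modality lives locally.
repo_root = Path("InternVideo/InternVideo2/multi_modality").resolve()
sys.path.insert(0, str(repo_root))
```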
-
Is there a timeline for when the 6B Stage-2 pretrained models will be released on Hugging Face? In the model zoo (https://github.com/OpenGVLab/InternVideo/blob/main/InternVideo2/multi_modality/MODEL_…
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmdetection3d/issues) and [Discussions](https://github.com/open-mmlab/mmdetection3d/discussions) but cannot get the expec…
-
While Prompt Flow already offers impressive flexibility regarding data types across the stack (flow input/output and node intermediate data), it is crucial to address the limited s…
-
Hello,
I would like to ask: if the input images are CT and PET, along with a mask, how should I use the code below? Thank you.
```
# with mask
data = {'image': img, 'mask': lbl}
aug_data = aug(**data)
```
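The snippet does not say which library `aug` comes from, so the following is only a sketch assuming an albumentations-style pipeline, where a second modality (PET) is declared via `additional_targets` so it receives the same spatial transforms as the CT image; all variable names are illustrative:
```python
import numpy as np
import albumentations as A

# Dummy CT/PET slices and mask, just to make the sketch runnable.
ct_img = np.random.rand(128, 128).astype(np.float32)
pet_img = np.random.rand(128, 128).astype(np.float32)
lbl = np.zeros((128, 128), dtype=np.uint8)

# Declare 'pet' as an extra image target so it is transformed
# identically to the primary 'image' (the CT slice).
aug = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomRotate90(p=0.5)],
    additional_targets={"pet": "image"},
)

aug_data = aug(image=ct_img, pet=pet_img, mask=lbl)
# aug_data keys: 'image' (CT), 'pet' (PET), 'mask'
```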
-
This page is accessible via [roadmap.vllm.ai](https://roadmap.vllm.ai)
### Themes
As before, we categorized our roadmap into 6 broad themes: broad model support, wide hardware coverage, state of…
-
For anyone who wants to contribute (add features, report bugs, or simply discuss and learn), join our [Discord](https://discord.gg/d9vcY7PA8Z) 👋
Or you can just comment here for open discussion…