-
TRL `SFTTrainer` supports LLaVA (Large Language and Vision Assistant), as described in [Vision Language Models Explained](https://huggingface.co/blog/vlms).
Is there any plan to rele…
-
### Feature request
In [`Transformers 4.36`](https://github.com/huggingface/transformers/releases/tag/v4.36.0), we started adding native support of [torch.nn.functional.scaled_dot_product_attention](…
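For reference, scaled dot-product attention computes `softmax(Q·Kᵀ/√d)·V`, which is what `torch.nn.functional.scaled_dot_product_attention` fuses into one kernel. A minimal dependency-free sketch of that math (plain lists, no masking or dropout; the helper name is illustrative, not part of any library):

```python
import math

def sdpa_reference(q, q_k, v):
    """Reference softmax(Q K^T / sqrt(d)) V over nested lists.

    Mirrors the math behind scaled dot-product attention; a real
    implementation would also handle masking, dropout, and batching.
    """
    d = len(q[0])
    # Attention scores: Q K^T scaled by 1/sqrt(d)
    scores = [[sum(qi * ki for qi, ki in zip(qrow, krow)) / math.sqrt(d)
               for krow in q_k] for qrow in q]
    # Row-wise softmax (subtract the max for numerical stability)
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Output: attention-weighted sum of the value rows
    return [[sum(w * vrow[j] for w, vrow in zip(wrow, v))
             for j in range(len(v[0]))] for wrow in weights]

# Tiny worked example: one query attending over two key/value rows
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
out = sdpa_reference(q, k, v)
```

The fused PyTorch op produces the same result for equivalent tensor inputs, but dispatches to FlashAttention or memory-efficient kernels when available.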
-
### Your current environment
Latest Docker image, for 0.5.3:
```
docker pull vllm/vllm-openai:latest
docker run -d --restart=always \
--runtime=nvidia \
--gpus '"device=1"' \
--shm-size…
```
-
### 🚀 The feature, motivation and pitch
InternVL2 is currently the most powerful open-source Multimodal Large Language Model (MLLM). The InternVL2 family includes models ranging from a 2B model, suit…
-
I'm wondering what causes this error.
Do I have to set `--version phi3` during the pre-training stage? I used `--version plain` in the pre-training stage and `--version phi3` in the fine-tuning stage. Is this the correct s…
-
### Your current environment
Two docker containers based on images built from vllm source **3de6e6a3** and **3f3b6b21**
### 🐛 Describe the bug
I passed the same model Phi-3-vision-128k-instru…
-
### Your current environment
vllm version:
```
e9de9dd551ac595a9f3825fcd1507deceef4f332
```
```text
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA use…
```
-
### System Info
python==3.10.14
transformers==4.42.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officia…
-
Looking for an ONNX version of microsoft/Phi-3-vision-128k-instruct.
The ONNX files don't seem to be on Hugging Face.
Also, does onnxruntime-genai support multiple GPUs on the same PC?
Thanks,
Ash
-
While our [draft charter](https://www.w3.org/2023/03/proposed-webmachinelearning-charter.html) says that the group:
> priority on building blocks required by well-known model architectures such as re…