-
Because llama.cpp does not support Phi-3-Vision, I'm stuck. This is more complicated than what individual developers can handle on their own.
-
## ❓ General Questions
Hello,
I'm trying to build an Android app with a locally customized **llava** model (not hosted on HuggingFace). I referred to the guides below:
https://llm.mlc.ai/docs/compilation…
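For reference, the compilation flow in that guide boils down to three `mlc_llm` CLI steps. The sketch below is an assumption-laden outline: the model paths, quantization choice, and conversation template are placeholders, not values from the original post, and the exact flags depend on your `mlc_llm` version.

```shell
# Sketch of the MLC-LLM packaging flow (placeholder paths and flags;
# consult the linked docs for the options your mlc_llm version supports).

# 1. Convert the HF-format weights into MLC's quantized weight format.
mlc_llm convert_weight ./models/my-llava \
    --quantization q4f16_1 \
    -o ./dist/my-llava-q4f16_1-MLC

# 2. Generate the chat config (the conversation template is model-specific).
mlc_llm gen_config ./models/my-llava \
    --quantization q4f16_1 \
    --conv-template llava \
    -o ./dist/my-llava-q4f16_1-MLC

# 3. Compile a model library for the Android target.
mlc_llm compile ./dist/my-llava-q4f16_1-MLC/mlc-chat-config.json \
    --device android \
    -o ./dist/libs/my-llava-android.tar
```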
-
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct
-
**Describe the bug**
The official vLLM wiki claims support for Phi-3-Vision (microsoft/Phi-3-vision-128k-instruct, Phi3VForCausalLM), but when I try to run it I get the following error:
`[rank0]: Tr…
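One thing worth ruling out before treating this as a bug: Phi-3-Vision ships custom model code in its checkpoint repository, so vLLM needs remote-code trust enabled when loading it. A minimal launch sketch (the flags are standard vLLM CLI options; the context length is an illustrative value, not from the report):

```shell
# Hypothetical launch; --trust-remote-code is needed because the model's
# code lives in the HF checkpoint repo. Adjust max-model-len to your GPU.
vllm serve microsoft/Phi-3-vision-128k-instruct \
    --trust-remote-code \
    --max-model-len 4096
```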
-
### Describe the documentation issue
I am trying to follow the instructions at https://onnxruntime.ai/docs/genai/tutorials/phi3-v.html, using the "Run with NVIDIA CUDA" approach.
I am able to set up …
-
Running this script:
```python
import mlx.core as mx
from mlx_vlm import load, generate
import os
from pathlib import Path
# model_path = "mlx-community/llava-1.5-7b-4bit"
# model_path = "…
-
### Your current environment
llm = VLLMOpenAI(
openai_api_key="EMPTY",
openai_api_base=api_base,
model_name="microsoft/Phi-3-vision-128k-instruct",
model_kwargs={"stop": ["."]…
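For context, when querying a vision model through an OpenAI-compatible endpoint like this, the image typically travels as a base64 data URL inside the chat message. A small helper sketch (the function name and MIME default are mine, not from the post):

```python
import base64

def image_to_data_url(path: str, mime: str = "image/png") -> str:
    """Encode a local image file as a base64 data URL, the form an
    OpenAI-compatible chat-completions endpoint accepts for image input."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{b64}"
```

The resulting string goes into an `image_url` content part of the chat message alongside the text prompt.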
-
Please let us know what model architectures you would like to see added!
**Up to date todo list below. Please feel free to contribute any model, a PR without device mapping, ISQ, etc. will still be …
-
Hi, I am trying to add LLM functionality to Android-compatible devices. Can anyone tell me how to build for Android? Also, is any help available on the front of multimodal LLM deployment on mobile w…
-
It seems that microsoft/Phi-3.5-vision-instruct is not working with the config below:
```
torchrun --nproc_per_node=1 \
src/training/train.py \
--lora_enable True \
--vision_lora True \
-…