-
### What is the issue?
Hi, I'm working on an LLM project that uses phi3-vision. I recently pushed the model into Ollama, but I'm getting much worse responses than I get from my Colab notebook. On Colab, phi3-vision can re…
-
Lots of people have asked for a working local version that is not reliant on OpenAI.
So far, OSS models have not seemed good enough, but [Phi-3](https://huggingface.co/microsoft/Phi-3-visio…
-
Thanks for the conversion code for phi3-vision.
I'm building an app that serves concurrent requests and needs continuous batching. Can I run inference on phi3-vision with a batch size larger than 1 (I mean in onnx mode…
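Not from the thread itself, but as a hedged illustration of what a batch size larger than 1 means at the tensor level: variable-length requests are padded to a common length and stacked along a leading batch axis before being fed to the model. The helper below is a hypothetical sketch; whether phi3-vision's ONNX export actually accepts a dynamic batch dimension depends on how it was exported.

```python
import numpy as np

def pad_and_batch(token_seqs, pad_id=0):
    """Left-pad variable-length token sequences and stack them into one
    (batch, max_len) int64 array, plus an attention mask marking real tokens.
    This only illustrates batching; it is not part of the phi3-vision code."""
    max_len = max(len(s) for s in token_seqs)
    ids = np.full((len(token_seqs), max_len), pad_id, dtype=np.int64)
    mask = np.zeros((len(token_seqs), max_len), dtype=np.int64)
    for i, seq in enumerate(token_seqs):
        ids[i, max_len - len(seq):] = seq   # left-pad: real tokens at the end
        mask[i, max_len - len(seq):] = 1
    return ids, mask

# Two requests of different lengths batched together:
ids, mask = pad_and_batch([[101, 7, 8], [101, 9]])
print(ids.shape)          # (2, 3)
print(mask.sum(axis=1))   # [3 2]
```

A session that supports a dynamic batch axis would then take `ids` and `mask` in a single `run` call instead of looping over requests one at a time.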
-
Hello. I am trying to replicate the baseline results for Target unlearning from the paper; however, I have been getting consistently worse results for both Llama3-8B-Instruct and Phi-3 Mini-4K-Instruct…
-
Hello, after QLoRA training I got the following checkpoint output under:
```
ll output/lora_vision_test/
adapter_config.json
adapter_model.safetensors
checkpoint-178/
config.json
non_lora_state_dict.bin
…
-
### Your current environment
```
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubunt…
```
-
Since llama.cpp does not support phi3-vision, I'm stuck. The conversion is more complicated than what individual developers can manage.
-
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct
-
## ❓ General Questions
Hello,
I'm trying to build an Android app with a local, customized (not on HuggingFace) **llava** model. I referred to the guides below:
https://llm.mlc.ai/docs/compilation…
-
Running this script:
```python
import mlx.core as mx
from mlx_vlm import load, generate
import os
from pathlib import Path
# model_path = "mlx-community/llava-1.5-7b-4bit"
#model_path = "…