-
Running this script:
```python
import mlx.core as mx
from mlx_vlm import load, generate
import os
from pathlib import Path
# model_path = "mlx-community/llava-1.5-7b-4bit"
#model_path = "…
```
-
Support for Phi 3 is almost complete. It seems like this library is just missing Phi 3 Vision. Given that every other Phi 3 LLM is supported and there is multimodal support already for LLaVA, it seems…
-
2 new models released from Microsoft:
https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/
https://huggingface.co/microsoft/Phi-3-small-8k-instruct/
Medium uses Phi3ForCausalLM and conv…
-
Running with GPT-4o works fine.
But when I switched to a local model, I got this error message:
EXCEPTION: 'function' object has no attribute 'name'
![image](https://github.com/onuratakan/gpt-compute…
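For context, this error class usually means some code accessed `.name` on a raw function object: Python functions expose `__name__`, not `.name`, so this typically happens when a function is passed somewhere an object with a `name` attribute (e.g. a tool or model wrapper) was expected. A minimal reproduction of the error itself:

```python
def my_tool():
    """A plain function passed where a tool-like object was expected."""
    return "ok"

# Functions have __name__, not .name, so accessing .name raises AttributeError:
try:
    print(my_tool.name)
except AttributeError as e:
    print(e)                 # e.g. 'function' object has no attribute 'name'
    print(my_tool.__name__)  # the attribute that actually exists: 'my_tool'
```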
-
It seems that microsoft/Phi-3.5-vision-instruct is not working with the config below:
```
torchrun --nproc_per_node=1 \
src/training/train.py \
--lora_enable True \
--vision_lora True \
-…
```
-
Getting this message:
```
File "/anaconda3/lib/python3.11/site-packages/transformers/processing_utils.py", line 926, in apply_chat_template
    raise ValueError(
ValueError: No chat template is set f…
```
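For anyone hitting this: newer transformers versions raise this ValueError when the tokenizer/processor config does not ship a chat template. One workaround is to format the prompt yourself. The sketch below assumes the Phi-3 instruct markers (`<|user|>`, `<|end|>`, `<|assistant|>`); verify against the model card, since the exact template is model-specific:

```python
def phi3_format(messages):
    """Render chat messages in the Phi-3 instruct style.
    Markers are assumed from the Phi-3 model card; check the
    authoritative template for your exact checkpoint."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    parts.append("<|assistant|>\n")  # cue the model to start its reply
    return "".join(parts)

prompt = phi3_format([{"role": "user", "content": "What is shown in this image?"}])
print(prompt)
```

Alternatively, assigning a Jinja template string to `tokenizer.chat_template` (or `processor.chat_template`) before calling `apply_chat_template` also avoids the ValueError.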
-
### First Check
- [X] This is not a feature request.
- [X] I added a very descriptive title to this issue (title field is above this).
- [X] I used the GitHub search to find a similar issue and didn'…
-
Would like to be able to run this with local LLM stacks like litellm or ollama, etc.
Could you provide a parameter to specify the model and base URL?
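Most local stacks (ollama, litellm, vLLM) expose an OpenAI-compatible endpoint, so a `model` + `base_url` pair is usually all that is needed. A hypothetical sketch of what such a config could look like (the class and field names here are made up for illustration, not the project's actual API):

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    """Hypothetical settings object: point the client at any
    OpenAI-compatible server instead of api.openai.com."""
    model: str = "gpt-4o"
    base_url: str = "https://api.openai.com/v1"
    api_key: str = "sk-..."

# Example: a local ollama server (ollama serves an OpenAI-compatible API on /v1)
local = LLMConfig(model="llama3",
                  base_url="http://localhost:11434/v1",
                  api_key="ollama")
print(local.model, local.base_url)
```

Any OpenAI-style client can then be constructed from these two fields, which is why a single pair of parameters covers litellm, ollama, and similar backends.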
-
```
llava_name_or_path='hub/llava-phi-3-mini-xtuner'
model = dict(
type=LLaVAModel,
freeze_llm=True,
freeze_visual_encoder=True,
pretrained_pth=llava_name_or_path,
llm=dict(…
```
-
LoRA + base is working well:
![image](https://github.com/mbzuai-oryx/LLaVA-pp/assets/15274284/ccec0900-7db0-4729-9ab4-3c5f68e0f304)
![image](https://github.com/mbzuai-oryx/LLaVA-pp/assets/15274284/7d12…