-
### What is the issue?
I encountered an issue when attempting to load the 'llava' AI model; others, such as 'Llama3' or 'Phi3', load without problems. Here are the details:
```
>>ollama run llava
…
```
-
Hi - thanks so much for making this repo!
I just ran the benchmark on my 32GB M1 MacBook Pro and I'm getting tokens-per-second (tps) numbers roughly 60% of what was reported. Any idea what might be going on?
Se…
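For context, tps is just generated tokens divided by wall-clock time; a minimal sketch (with made-up numbers, not this repo's benchmark code) for sanity-checking the reported figures:

```python
import time

def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput as generated tokens over wall-clock seconds."""
    return n_tokens / elapsed_s

start = time.perf_counter()
# ... generation would happen here ...
elapsed = time.perf_counter() - start

# With hypothetical numbers: 256 tokens in 8 seconds -> 32 tps.
print(tokens_per_second(256, 8.0))  # 32.0
```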
-
### 🐛 Describe the bug
Repro:
```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
from ml_dtypes import bfloat16
import …
```
-
### **Background:**
TT-Buda, developed by Tenstorrent, is a growing collection of model demos showcasing the capabilities of AI models running on Tenstorrent hardware. These demonstrations cover a wi…
-
Two new models released by Microsoft:
https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/
https://huggingface.co/microsoft/Phi-3-small-8k-instruct/
Medium uses Phi3ForCausalLM and conv…
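Both checkpoint names encode their context window ("4k", "8k"). A small hedged helper (my own, not part of any Microsoft tooling) to parse that out when scripting against the repo IDs above:

```python
import re

def context_window(model_id: str) -> int:
    """Parse the context length (e.g. '4k' -> 4096) from a checkpoint name."""
    m = re.search(r"-(\d+)k-", model_id)
    if m is None:
        raise ValueError(f"no context length found in {model_id!r}")
    return int(m.group(1)) * 1024

print(context_window("microsoft/Phi-3-medium-4k-instruct"))  # 4096
print(context_window("microsoft/Phi-3-small-8k-instruct"))   # 8192
```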
-
LoRA+base is working well
![image](https://github.com/mbzuai-oryx/LLaVA-pp/assets/15274284/ccec0900-7db0-4729-9ab4-3c5f68e0f304)
![image](https://github.com/mbzuai-oryx/LLaVA-pp/assets/15274284/7d12…
-
Hi,
I am running the phi3.5 vision model with the command below on an Apple M2 MacBook:
'cargo run --release --features metal -- --port 1234 vision-plain -m microsoft/Phi-3.5-vision-instruct -a phi3v'…
-
RAG for Phi 3 Vision using Kernel Memory in place of Semantic Text Memory. Is there any example of offline retrieval in C#?
Sorry to bother you; I am new to this arena.
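While waiting for a C# sample, the retrieval step itself is language-agnostic. A deliberately tiny Python sketch of the shape of offline retrieval — term-overlap scoring stands in for real embeddings, and none of this is Kernel Memory API:

```python
def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank stored chunks by shared terms with the query and return the top k."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_terms & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

docs = [
    "phi 3 vision handles images",
    "kernel memory stores embeddings",
    "the weather is nice",
]
print(retrieve("kernel memory embeddings", docs, k=1))
# ['kernel memory stores embeddings']
```

In a real pipeline the overlap score would be replaced by embedding similarity, but the retrieve-then-generate flow is the same in C#.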
-
I'm trying to run the following code in Kaggle with a **P100 GPU**
`!bash /kaggle/working/Phi3-Vision-Finetune/scripts/finetune_lora_vision.sh`
### Complete error
`[2024-09-14 09:33:24,960] [INFO] …
-
Hello all,
Thank you for your great work here. I was testing the Phi 3 Vision model in mistral.rs, and the error appears to stem from a linear layer. I have verified that t…