-
### This issue is for a: (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior …
```
-
https://huggingface.co/Qwen/Qwen-VL-Chat/tree/main
https://huggingface.co/deepseek-ai/deepseek-vl-7b-chat
I've gotten extremely good results from these; it would be great to have them as a baseline in…
-
### System Info / 系統信息
CUDA Version: 12.2
Transformers: 4.45.1
Python: 3.10.12
Operating System: Ubuntu
vllm: 0.6.2
### Who can help? / 谁可以帮助到您?
_No response_
### Information / 问题信息
- [X] The official exa…
-
After installation, when I run `bash finetune_lora.sh`, I get the following error:
File "/home/jinchuan/anaconda3/envs/llama3v/lib/python3.10/site-packages/transformers/models/mllama/modeling_mllama.py", line 650, in…
-
Hi,
I am running the Phi-3.5 vision model on an Apple M2 MacBook using the command below:
`cargo run --release --features metal -- --port 1234 vision-plain -m microsoft/Phi-3.5-vision-instruct -a phi3v`…
-
That model is insane for its size ....
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct
-
Thanks for the conversion code for Phi-3 Vision.
I'm making an app for concurrent requests that needs continuous batching. Can I run inference with Phi-3 Vision at a batch size larger than 1 (I mean in ONNX mode…
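For the concurrent-request side of this, one common pattern is a micro-batcher that collects requests arriving within a short window and runs them as one batch. Below is a minimal, framework-agnostic sketch; `run_batch` is a hypothetical callable standing in for whatever batched model call (ONNX or otherwise) ends up being available:

```python
# Minimal micro-batching sketch for concurrent requests.
# `run_batch` is a placeholder for a batched inference call (hypothetical);
# here it can be any function mapping a list of inputs to a list of outputs.
import queue
import threading


class MicroBatcher:
    """Collects concurrent requests and runs them as one batch."""

    def __init__(self, run_batch, max_batch=8, wait_s=0.01):
        self.run_batch = run_batch    # batched inference callable
        self.max_batch = max_batch    # upper bound on batch size
        self.wait_s = wait_s          # how long to wait for more requests
        self.q = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, item):
        """Called from request-handler threads; blocks until the result is ready."""
        slot = {"input": item, "event": threading.Event(), "output": None}
        self.q.put(slot)
        slot["event"].wait()
        return slot["output"]

    def _loop(self):
        while True:
            batch = [self.q.get()]  # block until at least one request arrives
            try:
                # Grab any further requests that arrive within the wait window.
                while len(batch) < self.max_batch:
                    batch.append(self.q.get(timeout=self.wait_s))
            except queue.Empty:
                pass
            outputs = self.run_batch([s["input"] for s in batch])
            for slot, out in zip(batch, outputs):
                slot["output"] = out
                slot["event"].set()
```

This only covers request aggregation; true continuous batching (admitting new sequences mid-generation, as vLLM does) additionally requires the model runtime to support per-step scheduling, which plain ONNX export does not give you for free.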
-
# ComfyUI Error Report
## Error Details
- **Node Type:** ailab_OmniGen
- **Exception Type:** ValueError
- **Exception Message:** Phi3Transformer does not support an attention implementation throug…
-
I'm trying to run the following command in Kaggle with a **GPU P100**:
`!bash /kaggle/working/Phi3-Vision-Finetune/scripts/finetune_lora_vision.sh`
### Complete error
`[2024-09-14 09:33:24,960] [INFO] …
-
### 📦 Environment
Vercel
### 📌 Version
v1.26.11
### 💻 Operating System
Windows
### 🌐 Browser
Chrome
### 🐛 Bug Description
When "Get Model List" is pressed for GitHub, it reports "0 models avai…