-
Hello,
Thanks for your time and effort on this node!
I noticed that Microsoft quietly dropped the model for Kosmos 2.5 yesterday here: https://github.com/microsoft/unilm/tree/master/kosmos-2.5
…
CCpt5 updated 4 months ago
-
Hi, I've been exploring this repo for the past couple of days and I find your work here really amazing. I'm curious if there are any plans to add support for the Phi-3-vision-128k-instruct model to th…
-
### What is the issue?
The moondream model and other vision models like phi3 and llava don't return any text most of the time. Running Ollama 0.1.33-pre5 and ollama-rs. It's not an issue with ollama-rs because …
-
### What is the issue?
Hi team, I'm getting the error below:
C:\Windows\System32>ollama run gemma
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ol…
-
After starting pretraining, there is a bug:
Traceback (most recent call last):
  File "/data2/LLaVA-pp/LLaVA/llava/train/train_mem.py", line 4, in
    train(attn_implementation="flash_attention_2")
…
-
### Your current environment
I encountered a few issues while running phi-3-vision with vLLM built from the current main branch.
1. Dependency:
`torchvision` is a dependency under [image_pro…
-
Hello,
This is great work! I have several questions:
1. In the technical report you mentioned
> We find that LoRA empirically leads to better performance than fully tuning across all c…
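The quoted finding compares LoRA against full fine-tuning. As a reminder of what LoRA does mechanically, here is a minimal PyTorch sketch of a low-rank adapter wrapped around a frozen linear layer; the class name `LoRALinear` and the defaults `r=8`, `alpha=16` are illustrative, not taken from the report:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update BA (LoRA)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # full weights stay frozen
        # Only A and B are trained; B starts at zero, so at initialization
        # the wrapped layer behaves exactly like the frozen base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

base = nn.Linear(8, 8)
lora = LoRALinear(base, r=2)
x = torch.randn(3, 8)
# B is zero-initialized, so the output matches the frozen base at init.
assert torch.allclose(lora(x), base(x))
```

Because only the small `A` and `B` matrices receive gradients, LoRA updates a tiny fraction of the parameters touched by full fine-tuning, which is part of why the two can behave differently in practice.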
-
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using conversation format: phi3
Special tokens have been added in the vocabulary…
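The warning above appears when the tokenizer vocabulary grows but the model's embedding matrix has not been resized to match. In Hugging Face `transformers` the fix is `model.resize_token_embeddings(len(tokenizer))`; the helper below is an illustrative PyTorch sketch of what that resize does, and why the new rows still need training:

```python
import torch
import torch.nn as nn

def resize_embeddings(emb: nn.Embedding, new_size: int) -> nn.Embedding:
    """Grow an embedding table, preserving the existing rows.

    Rows for newly added special tokens are randomly initialized,
    which is why the warning says they must be fine-tuned or trained.
    """
    new_emb = nn.Embedding(new_size, emb.embedding_dim)
    new_emb.weight.data[: emb.num_embeddings] = emb.weight.data
    return new_emb

old = nn.Embedding(100, 16)       # original vocab of 100 tokens
new = resize_embeddings(old, 103) # e.g. 3 special tokens added
assert torch.equal(new.weight[:100], old.weight)
```

Until the new rows are trained, any added special token is looked up against effectively random vectors, which can silently degrade generation quality.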
-
## 🚀 Feature
Introduce Phi-3-mini-128k-instruct
https://huggingface.co/microsoft/Phi-3-mini-128k-instruct
The mini model can run on phones (there are 4k and 128k context versions)
## Motivation
…
-
Please integrate Phi-3 with LLaVA, as it is comparable to Llama 3 on benchmarks.