-
This is the best open-source vision model I have ever tried. We need support for it in Ollama.
-
### What is the issue?
I encountered an issue when attempting to load the 'llava' model; others such as 'Llama3' or 'Phi3' load without problems. Here are the details:
```
>>ollama run llava
…
```
-
Dear @dusty-nv ,
I pulled dustynv/ollama:r36.2.0 on a Jetson Orin 32GB dev kit.
Run command: `jetson-containers run --name ollama $(autotag ollama)`; the output was:
[GIN-debug] [WARNING] Running in "debug…
-
Thanks for sharing this interesting work.
I was wondering how you do inference on text-only tasks such as MMLU. Do you just use Llama3?
If so, this work actually keeps two models, one is Llam…
-
### Describe the issue
Issue: When loading weights for llava-v1.6-34b, it reports a model parameter mismatch.
Command:
```
model_path = "liuhaotian/llava-v1.6-34b"
with warnings.catch_warnin…
```
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Hi 👋🏻 Do you have any inference examples that I could use?
-
## Environment
- Platform: Debian Linux
- GPU: A100
- Torch: '2.1.2+cu121'
- Transformers: '4.37.2'
## Issue
I'm seeing random and sudden loss spikes during training, if there is a simpler wa…
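Loss spikes like these are sometimes tamed by clipping the global gradient norm before each optimizer step. A minimal, dependency-free sketch of the rescaling rule (the function name and the flat list of gradient values are illustrative, not from the original report; in practice `torch.nn.utils.clip_grad_norm_` does this over parameter tensors):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat list of gradient values so their global L2 norm
    does not exceed max_norm; a common remedy for sudden loss spikes."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [g * scale for g in grads]
    return grads

# A gradient vector with norm 5.0 gets scaled down to norm 1.0;
# gradients already under the threshold pass through unchanged.
print(clip_by_global_norm([3.0, 4.0], 1.0))
print(clip_by_global_norm([0.1, 0.2], 10.0))
```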
-
My computer has 5 Ollama models, but the "ollama generate" node finds only one, and it is not among my models.
How can I point the "ollama generate" node to my models path?
Please help.
"oll…
-
### Question
I was trying to run LLaVA inference on CPU, but it complains "Torch not compiled with CUDA enabled". I noticed that cuda() is called when loading the model. If I remove all the cuda() invoc…
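A common workaround for this class of error is to select the device at runtime instead of hard-coding `cuda()`. A minimal sketch, assuming the loading code can take a device string (the helper name is illustrative; on a CUDA-less build it simply falls back to "cpu"):

```python
def pick_device():
    """Return "cuda" when a CUDA-enabled torch is importable and a GPU
    is visible, otherwise fall back to "cpu"."""
    try:
        import torch  # optional dependency; absence just means CPU
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```

Loading the model with `.to(pick_device())`, or passing `map_location=pick_device()` when loading checkpoints, then avoids the unconditional `cuda()` calls.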