-
Using #64 as a reference I was able to run `LLama3.2-vision`, however the output seems completely unrelated to the input. I have not modified the prompt in any way and am passing a multipage PDF.
Code:
```python…
-
Should have a port to upload images
-
-
**Is your feature request related to a problem? Please describe.**
Llama 3.2 has been released, and since it has multimodal support it would be great to have it in LocalAI.
**Describe the solution you'd li…
-
As the title says: I can upload images to llava but not to llama3.2-vision.
llama3.2-vision:11b-instruct-q8_0
-
I saw that https://huggingface.co/Vision-CAIR/LongVU_Llama3_2_1B exists.
Does it handle the image part or the video part? Could it be combined with
LongVU_Llama3_2_3B (image or video), and what are the hardware requirements?
-
## Problem Statement
To support Vision models on Cortex, we need the following:
- [ ] 1. Download model .gguf and mmproj file
- [ ] 2. `v1/models/start` takes in `model_path` (.gguf) and `mmproj` p…
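
A minimal sketch of what step 2 could look like from a client's point of view, assuming the server listens locally and `v1/models/start` accepts a JSON body; the host, port, and file paths below are assumptions, only the `model_path` and `mmproj` fields come from the list above:
```python
import requests

# Assumed local server address; adjust host/port to your setup.
START_URL = "http://127.0.0.1:39281/v1/models/start"

# Hypothetical local paths to the files downloaded in step 1.
payload = {
    "model_path": "/models/llama3.2-vision-11b-q4_k_m.gguf",  # main .gguf weights
    "mmproj": "/models/llama3.2-vision-mmproj-f16.gguf",      # multimodal projector
}

resp = requests.post(START_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```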
-
### What is the issue?
If I run llama3.1 (which works fine):
```
Prompt: What is three plus one?
Calling function: add_two_numbers
Arguments: {'a': 3, 'b': 1}
Function output: 4
```
but if I run …
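
For reference, a minimal sketch of the kind of tool-calling setup that produces the llama3.1 output above, assuming a recent ollama Python client (which accepts plain Python functions as tools) and a user-defined `add_two_numbers`:
```python
import ollama

def add_two_numbers(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "What is three plus one?"}],
    tools=[add_two_numbers],  # recent clients derive the tool schema from the function
)

# Execute whatever tool calls the model requested.
for call in response.message.tool_calls or []:
    if call.function.name == "add_two_numbers":
        result = add_two_numbers(**call.function.arguments)
        print("Function output:", result)
```
The comparison in this issue amounts to swapping `model="llama3.1"` for a llama3.2-vision tag in a setup like this.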
-
# Bug Report
## Installation Method
docker
## Environment
[v0.3.35](https://github.com/open-webui/open-webui/releases/tag/v0.3.35)
Windows 10
Firefox 132.0.1 (64-bit)
**Confi…
-
Thank you for the llama 3.2 vision integration!
I was using llama3.2-3b via ChatOllama(model="llama3.2:latest").with_structured_output() to get a structured response from the model, and I was hopin…
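
For context, a minimal sketch of that structured-output setup, assuming `langchain_ollama`'s ChatOllama; the Pydantic schema and prompt are purely illustrative:
```python
from langchain_ollama import ChatOllama
from pydantic import BaseModel, Field

# Hypothetical schema for the structured response.
class Answer(BaseModel):
    summary: str = Field(description="One-sentence summary of the reply")
    confidence: float = Field(description="Self-reported confidence, 0-1")

llm = ChatOllama(model="llama3.2:latest")
structured_llm = llm.with_structured_output(Answer)

result = structured_llm.invoke("Summarize why the sky is blue.")
print(result.summary, result.confidence)
```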