-
We should move our tests and examples from llama3.1 to llama3.2.
Ollama already supports the 1B and 3B versions, while 11B and 90B will be available very soon.
Groq supports all the models in preview mo…
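A minimal migration sketch for the Ollama side, assuming the standard `llama3.2:1b` and `llama3.2:3b` tags from the Ollama model library (the test prompt is only an illustration):

```shell
# Pull the new small text models that Ollama already supports.
ollama pull llama3.2:1b
ollama pull llama3.2:3b

# Smoke-test the 3B model with a one-off prompt.
ollama run llama3.2:3b "Summarize this sentence in five words."
```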
-
### What is the issue?
1. Updated Ollama this morning.
2. Entered `ollama run x/llama3.2-vision` on a MacBook.
3. Got the output below:
> pulling manifest
> pulling 652e85aa1e14... 100% ▕██████████…
-
### System Info
CUDA Version: 12.4
GPU: A6000
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### 🐛 Describe the bug
After finetuning Llama3.2 visio…
-
### What is the issue?
ollama run x/llama3.2-vision:latest "describe this image: /home/papillon/Downloads/objectdetection.jpg"
Added image '/home/papillon/Downloads/objectdetection.jpg'
Error: POST…
-
### What is the issue?
Directly feeding a PNG image does not work (`failed to decode image: image: unknown format`), so until recently, I was using the code below in order to encode the image file…
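A common workaround for this kind of decode error is to base64-encode the file yourself before handing it to the API. A minimal sketch of that encoding step — the file name and the fake PNG payload here are hypothetical, purely for illustration:

```python
import base64
from pathlib import Path

def encode_image(path: str) -> str:
    """Read an image file and return its base64-encoded contents as ASCII text."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

# Hypothetical example: write a tiny stand-in "image" and round-trip it.
Path("sample.png").write_bytes(b"\x89PNG\r\n\x1a\n")
encoded = encode_image("sample.png")
assert base64.b64decode(encoded) == b"\x89PNG\r\n\x1a\n"
```

The resulting string is what APIs that expect base64 image payloads (rather than raw bytes) typically want.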
-
With all the growing activity and focus on multimodal models, is this library restricted to tuning text-only LLMs?
Do we plan to have vision or, more generally, multimodal model tuning support?
-
Hello everyone,
I'm trying to test working with images in Llama 3.2 Vision, using the code from the example:
```python
image_path = 'myimage.jpg'
#img = base64.b64encode(pathlib.Path(image_path).read_by…
```
-
### Describe the bug
There is a little bit of technical knowledge required to install Open Interpreter.
Because I'm a beginner, and I know others are too, here's what to do:
install Python from …
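For readers following along, the usual install path once Python is set up is via pip; a sketch assuming a working Python 3.10+ environment:

```shell
# Install Open Interpreter from PyPI into the current environment.
pip install open-interpreter

# Confirm the CLI is on the PATH.
interpreter --help
```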
-
Hi,
I've been playing around with using Ollama to generate OCR content, and I was wondering if you were planning on adding the ability to use a vision LLM as the OCR provider for Paperless.
I am following…
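One way such an integration could talk to Ollama is through its `/api/generate` endpoint, which accepts a list of base64-encoded images alongside the prompt. A hedged sketch that only builds the request body — the model tag, prompt wording, and sample bytes are assumptions, and nothing is actually sent:

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, model: str = "llama3.2-vision") -> str:
    """Build a JSON body for Ollama's /api/generate endpoint with one image."""
    payload = {
        "model": model,
        "prompt": "Transcribe all text visible in this image.",
        # Ollama expects images as base64-encoded strings.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }
    return json.dumps(payload)

body = build_ocr_request(b"\x89PNG\r\n\x1a\n")
# This body could then be POSTed to http://localhost:11434/api/generate.
```

A consumer script for Paperless would call this per document page and store the returned text as the document's content.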
-
The current implementation of local inference means no sharding/tensor parallelism, etc., and it refuses to work on my dual 4090 setup. How do I enable multi-GPU, or how do I enable a proper system like vLLM to run…
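For the vLLM route specifically, tensor parallelism across two GPUs is enabled with a single flag on its OpenAI-compatible server. A sketch assuming a working vLLM install and a model checkpoint that fits when split across the two cards (the model tag is an assumption, not something the original post names):

```shell
# Serve the model sharded across both 4090s via tensor parallelism.
vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct \
    --tensor-parallel-size 2
```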