-
Running the server (using the vLLM CLI or our [docker image](https://docs.vllm.ai/en/latest/serving/deploying_with_docker.html)):
* `vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct --enforce-eage…
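Once the server is up, a minimal sketch of a multimodal request through vLLM's OpenAI-compatible endpoint (assuming the default port 8000; the image URL and prompt below are placeholders, not from the original post):
```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                # Image is passed as an OpenAI-style image_url content part.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```
The `image_url` content part is how the OpenAI-style chat schema carries images; vLLM forwards it to the vision model alongside the text prompt.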
-
### 🥰 Feature Description
Ollama 0.4.0 added support for the llama3.2-vision model, which can recognize images. https://ollama.com/blog/llama3.2-vision
I tried calling the llama3.2-vision model from LobeChat v1.28.4 and found that it does not handle images correctly.
The relevant request body can be seen in the logs:
```json
{
"message…
-
### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GG…
-
Environment preparation:
```
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .[llm]
# or
pip install git+https://github.com/modelscope/swift.git#egg=ms-sw…
-
I'm using Ollama 0.3.12.
As I read on the official site, I can download Llama3.2-vision with
`ollama run llama3.2-vision:11b`
But when I try to run it, I get
`Error: pull model manifest: file doe…
-
I tried to load the LoRA training adapters from a DeepSpeed checkpoint:
dir:
```
ls Bunny/checkpoints-llama3-8b/bunny-lora-llama3-8b-attempt2/checkpoint-6000
total 696M
-rw-r--r-- 1 schwan46494@gmail.c…
-
### Requirements
- [X] I have searched the issues of this repository and believe that this is not a duplicate
- [X] I have confirmed this bug exists on the latest version of the app
### Platform
Wi…
-
I am trying to fine-tune Llama3.2 Vision Instruct, and I am using the distributed recipe and example (LoRA) config as a starting point. Eventually, I am looking to use a custom dataset, but first, I am…
-
I get more than 80,000 words when I use the Ollama Vision node, unbelievable!!! The Ollama model I use is llama3.2-vision:11b; I am not sure whether the problem is with that model or something else.
This is quite likely to…
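One way to narrow down whether the runaway output comes from the model itself or from the Ollama Vision node is to call Ollama's chat API directly and count the words returned. A minimal sketch, assuming a local Ollama on the default port 11434 and a placeholder image file (not from the original report):
```python
import base64
import requests

# Read a local test image and base64-encode it, as Ollama's chat API expects.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2-vision:11b",
        "messages": [
            {"role": "user", "content": "Describe this image.", "images": [image_b64]}
        ],
        "stream": False,
    },
)
reply = resp.json()["message"]["content"]

# Compare this word count against what the Ollama Vision node reports.
print(len(reply.split()), "words")
print(reply)
```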
-
Hi, thanks for this amazing project. I was trying to fine-tune the LoRA model for Llama3.2 Vision, which works fine and saved an adapter_0.pt; then I wanted to use this adapter checkpoint for inference i…
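For a quick look at what the saved checkpoint actually contains before wiring it into inference, a minimal sketch (assuming adapter_0.pt is a plain PyTorch state dict of LoRA weights, as torchtune's adapter checkpoints typically are):
```python
import torch

# Inspect the saved LoRA adapter checkpoint.
# Assumption: adapter_0.pt holds a flat state dict of LoRA tensors
# (e.g. lora_a / lora_b weights keyed by module path).
adapter_state = torch.load("adapter_0.pt", map_location="cpu")

for name, tensor in adapter_state.items():
    print(name, tuple(tensor.shape))

# For actual inference these weights still have to be applied on top of the
# base model weights (e.g. by pointing the generation recipe's checkpointer
# at both the base checkpoint and this adapter file).
```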