-
Repro below minified from a torchtune model, still investigating:
```
import torch
def forward(
s0: "",
s1: "",
L_x_: "",
L_self_modules_sa_norm_parameters_scale_: "",
…
-
Environment preparation:
```
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .[llm]
# or
pip install git+https://github.com/modelscope/swift.git#egg=ms-sw…
-
### 🥰 Feature request
Ollama 0.4.0 added support for the llama3.2-vision model, which can recognize images: https://ollama.com/blog/llama3.2-vision
I tried calling the llama3.2-vision model from LobeChat v1.28.4 and found that images are not handled correctly.
The relevant request body can be seen in the logs:
```json
{
"message…
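For comparison, a minimal sketch of the request-body shape Ollama's documented `/api/chat` endpoint expects for vision models: images travel as base64 strings in the message's `images` field, not as URLs or multipart parts. The model name and image bytes here are placeholders.

```python
import base64
import json

# Placeholder for image bytes that would normally be read from disk.
image_bytes = b"\x89PNG placeholder"
image_b64 = base64.b64encode(image_bytes).decode("ascii")

# Per the Ollama API docs, each message may carry an optional "images"
# list of base64-encoded images alongside its text content.
body = {
    "model": "llama3.2-vision",
    "messages": [
        {"role": "user", "content": "What is in this image?", "images": [image_b64]}
    ],
    "stream": False,
}
payload = json.dumps(body)
```

A client that sends the image any other way (e.g. embedded in `content`) will not be understood by the model.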
-
In the app.py file, the init method sets model_name = "cambrian_qwen" and model_path = "./checkpoints/longvu_qwen", but the README tells us to download the models LongVU_Qwen2_7B_img, LongVU_Qwen2_7B, or LongVU…
-
### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GG…
-
qwen2-vl has always been memory hungry (compared to the other vision models), and even with unsloth it still OOMs, while the largest llama3.2 11b works fine.
I'm using a dataset that has high resolution…
-
I'm using ollama 0.3.12
As I read on the official site, I can download LLaMa3.2-vision with
`ollama run llama3.2-vision:11b`
But when I try to run, I'm getting
`Error: pull model manifest: file doe…
-
For any config in which `checkpoint_files` is a list of more than 4 files, use the FormattedFiles utility to shrink the size of the file.
Example from [llama3/70B_lora](https://github.com/pytorch/torchtune/blob/…
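The idea is that a utility like this expands a filename template plus a count into the full list, so a config can replace dozens of literal filenames with two short fields. A minimal sketch of that expansion, with illustrative names only (not torchtune's actual API):

```python
def expand_checkpoint_files(filename_format: str, max_filename: str) -> list[str]:
    """Expand a template like "model-{}-of-{}.safetensors" into the full
    zero-padded list of checkpoint filenames.

    Illustrative only; the helper name and signature are assumptions, not
    torchtune's real utility.
    """
    count = int(max_filename)        # e.g. "0003" -> 3 files
    width = len(max_filename)        # zero-pad indices to the same width
    return [
        filename_format.format(str(i).zfill(width), max_filename)
        for i in range(1, count + 1)
    ]

# expand_checkpoint_files("model-{}-of-{}.safetensors", "0003") yields:
# ["model-0001-of-0003.safetensors",
#  "model-0002-of-0003.safetensors",
#  "model-0003-of-0003.safetensors"]
```

This keeps 70B-scale configs readable: thirty-odd shard names collapse into a format string and a count.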
-
I was attempting to use the `granite3.0-8b-dense` model, and it seems there is no way to put in a custom model. When I attempted to use `ollama`, the error came back with:
```
bee-api-1 | {"level":"err…
-
It looks like `seed` is not working when used in `chat()`: I get no consistent responses when setting it. I also ran the same test with `ollamar` and did receive consistent results:
``` r
ollama…
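For reference, in Ollama's documented REST API, sampling parameters such as `seed` and `temperature` belong inside the `options` object of the `/api/chat` request body; if a client wrapper drops `options["seed"]`, identical calls will not be reproducible. A sketch of the body a correct client should send (model name is a placeholder):

```python
import json

# Per the Ollama API docs, sampling parameters go inside "options".
# With a fixed seed and temperature 0, repeated identical requests
# should produce identical responses.
body = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Pick a random number."}],
    "options": {"seed": 42, "temperature": 0},
    "stream": False,
}
payload = json.dumps(body)
```

If the wire payload lacks the `seed` key, the bug is in the client binding rather than the server.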