-
Hi authors,
I was trying to run InternVL-8B and InternVL-26B on 4 GPUs, but I got this error:
```
File ".cache/huggingface/modules/transformers_modules/main/modeling_internlm2.py", line 656, in forwa…
```
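For context, when a model this size is spread over 4 GPUs, uneven layer placement is a common cause of forward-pass errors or OOM on one card. Below is a minimal sketch of building an explicit per-layer device map (plain Python; the helper name and the uniform-split policy are my own illustration, not InternVL's actual loader):

```python
def build_layer_device_map(num_layers: int, num_gpus: int) -> dict:
    """Assign transformer layers to GPUs as evenly as possible.

    Returns a dict like {"model.layers.0": 0, ...}, the format
    accepted by transformers' `device_map` argument.
    """
    per_gpu, rem = divmod(num_layers, num_gpus)
    device_map = {}
    layer = 0
    for gpu in range(num_gpus):
        # the first `rem` GPUs take one extra layer
        count = per_gpu + (1 if gpu < rem else 0)
        for _ in range(count):
            device_map[f"model.layers.{layer}"] = gpu
            layer += 1
    return device_map

# e.g. 48 layers over 4 GPUs -> 12 layers per GPU
dm = build_layer_device_map(48, 4)
```

Passing such a map (with the non-layer modules pinned explicitly as well) instead of `device_map="auto"` makes the split deterministic and easier to debug.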
-
I updated Ollama from 0.1.16 to 0.1.18 and then encountered this issue.
I am using Python with Ollama and LangChain to run LLM models on a Linux server (4 x A100 GPUs).
There are 5,000 prompts to ask and get…
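Not the fix itself, but for reproducibility: with that many prompts it helps to send them in fixed-size batches rather than one long loop. A minimal sketch (the batch size and prompt names are placeholders, not from the original report):

```python
from typing import Iterator


def chunked(items: list, size: int) -> Iterator[list]:
    """Yield successive fixed-size batches so thousands of prompts
    are not fired at the server in one unbroken stream."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


prompts = [f"prompt {i}" for i in range(5000)]
batches = list(chunked(prompts, 64))
```

Each batch can then be passed to the LangChain Ollama wrapper, with a checkpoint written between batches so a crash mid-run does not lose all progress.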
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this error already answered in the FAQ? | Is there an existing ans…
-
Hi,
I think it would be cool if `ollama run` without any extra arguments showed the models from `ollama list`, but with a number next to each one.
I.e. `ollama run` ->
```sh
TYPE NUMBER OF MODEL TO RUN…
```
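The selection logic behind such a prompt is simple; here is a sketch of just that piece (Python for illustration only; Ollama itself is written in Go, and these names are made up):

```python
def pick_model(models: list, choice: str) -> str:
    """Map a typed number (1-based, as shown in the menu) to a model name.

    Returns None for anything that is not a valid menu number, so the
    CLI can re-prompt instead of crashing on bad input.
    """
    if not choice.strip().isdigit():
        return None
    index = int(choice) - 1
    if 0 <= index < len(models):
        return models[index]
    return None


menu = ["llama2:latest", "mistral:latest", "codellama:13b"]
```

Rejecting out-of-range and non-numeric input up front keeps the interactive loop trivial: anything that maps to `None` just redisplays the menu.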
-
I tested it, and it has very good quality for its size:
https://huggingface.co/openbmb/MiniCPM-V-2
-
1. Clone Repositories and Related Packages
1.1 image-textualization
```sh
git clone https://github.com/sterzhang/image-textualization.git
cd image-textualization
conda create --name image-textualization…
```
-
The 26B model seems to have a problem when loaded across multiple GPUs: GPU 0 uses far more VRAM than GPU 1, which suggests the model sharding is off.
With multiple images, the first GPU goes OOM.
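A workaround worth trying (assuming the model is loaded via transformers/accelerate; the GiB values here are illustrative, not measured) is to cap GPU 0 with a `max_memory` map, so fewer shards land on the card that also holds activations and the KV cache:

```python
def make_max_memory(num_gpus: int, first_gpu_gib: int, other_gpu_gib: int) -> dict:
    """Build a `max_memory` dict for `from_pretrained(..., device_map="auto")`
    that reserves extra headroom on GPU 0."""
    return {
        gpu: f"{first_gpu_gib if gpu == 0 else other_gpu_gib}GiB"
        for gpu in range(num_gpus)
    }


# e.g. 4 GPUs, keep GPU 0 lighter than the rest
mm = make_max_memory(4, 12, 20)
```

The resulting dict is passed as `max_memory=mm` alongside `device_map="auto"`; accelerate then places layers so that GPU 0 stays under its lower cap.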
-
### Training script
```sh
#!/bin/bash
GPUS_PER_NODE=2
NNODES=1
NODE_RANK=0
MASTER_ADDR=localhost
MASTER_PORT=6001
MODEL="/root/MiniCPM-V/pretrained_weights/MiniCPM-V-2_6" # or openbmb/MiniCPM-V-2, openbmb/…
```
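For sanity-checking a launch like this, the distributed world size and per-process ranks follow directly from those variables. A small Python check (the rank arithmetic is the standard torchrun convention, written out here as my own illustration):

```python
def world_size(gpus_per_node: int, nnodes: int) -> int:
    """Total number of processes the launcher will spawn."""
    return gpus_per_node * nnodes


def global_rank(node_rank: int, gpus_per_node: int, local_rank: int) -> int:
    """Global rank of a process given its node index and local GPU index."""
    return node_rank * gpus_per_node + local_rank


# With GPUS_PER_NODE=2, NNODES=1: two processes, global ranks 0 and 1.
ws = world_size(2, 1)
```

If the effective batch size or learning-rate scaling in the script assumes a different world size, that mismatch is a common source of silent divergence.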
-
Hi, I am trying to finetune LLaVA-NeXT on my custom dataset using the "finetune_clip.sh" shell script.
I made some edits to the script for convenience and to fit my task, like this:
```
…
```
-
**What problem or use case are you trying to solve?**
Context: https://opendevin.slack.com/archives/C06P5NCGSFP/p1719073107473339
It will be very helpful for the agent to actually "see," especia…