-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…
-
### 📚 The doc issue
In the "Enhancing InternVL2 on COCO Caption Using LoRA Fine-Tuning" tutorial, it says: "Next, we'll fine-tune the InternVL2-2B model using LoRA. Execute the following command for fine-tuning:
…
-
# Trending repositories for C#
1. [**ExOK / Celeste64**](https://github.com/ExOK/Celeste64)
__A game made by the Celeste developers in a week(ish, closer to 2)__
170 star…
-
### Describe the problem
```
File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2024, in generate
    result = self._sample(
File "/usr/local/lib/python3.10/dist-packages/transform…
```
-
### Motivation
I tried to deploy InternVL-1B with LMDeploy, which said this model is not supported by turbomind, so I added `--backend pytorch`, but then got this kind of error:
```
(dl_venv) PS xxxx…
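# --- Hedged sketch, not part of the original log: the PyTorch-backend
# --- launch described above would look roughly like this; the exact model
# --- path and port are assumptions, not taken from the issue.
lmdeploy serve api_server OpenGVLab/InternVL2-1B --backend pytorch --server-port 23333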
eeyrw updated 2 hours ago
-
### 📚 The doc issue
I trained InternVL v2 8B myself and quantized it with LMDeploy's w8a8 quantization without any errors. But when calling it through the official API:
```
self.pipe = pipeline(MODEL_PATH)  # MODEL_PATH is the quantized checkpoint path
response = self.pipe((instruction, image))
```
it already fails at the pipeline construction stage:
```
File "/…
```
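For context, here is a minimal self-contained sketch of the call pattern described above. The wrapper function and the import guard are illustrative additions; only `pipeline(MODEL_PATH)` and the `(instruction, image)` tuple form come from the issue.

```python
# Sketch of the lmdeploy VLM pipeline call quoted above.
# The wrapper and the ImportError guard are illustrative additions.
try:
    from lmdeploy import pipeline
except ImportError:  # lmdeploy may not be installed in this environment
    pipeline = None

def build_and_query(model_path, instruction, image):
    """Build the pipeline once, then query it with an (instruction, image) tuple."""
    if pipeline is None:
        raise RuntimeError("lmdeploy is required for this sketch")
    pipe = pipeline(model_path)        # the reported failure happens here
    return pipe((instruction, image))
```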
-
Hi, I downloaded the model from https://modelscope.cn/models/chg0901/EmoLLMV3.0 and then set the model in app.py to model = "EmoLLM_Model". On startup I ran into this problem:
```
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 11…
```
-
### Describe the bug
[ModelCloud/internlm-2.5-7b-chat-gptq-4bit](https://huggingface.co/ModelCloud/internlm-2.5-7b-chat-gptq-4bit) and my code:
```
from vllm import LLM, SamplingParams
# Sample prompts.
p…
```
-
I've set up a judge with
```
lmdeploy serve api_server internlm/internlm2-chat-1_8b --server-port 23333 --model-name internlm2-chat-1_8b
```
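Before pointing an evaluation at this judge, the server can be sanity-checked through its OpenAI-compatible `/v1/models` endpoint. A stdlib-only sketch; the helper names are mine, and the URL/port come from the command above:

```python
# Sanity-check sketch for the LMDeploy api_server started above.
# Helper names are illustrative; /v1/models is part of the server's
# OpenAI-compatible API.
import json
import urllib.request

def parse_model_ids(payload):
    """Extract model ids from an OpenAI-style /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(base_url="http://0.0.0.0:23333"):
    """Query the running server and return the model ids it serves."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return parse_model_ids(json.load(resp))
```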
And now when I run the evaluation with
```
python run.py…
```
-
Hi authors,
I was trying to run InternVL-8B and InternVL-26B on 4 GPUs, but I got this error:
```
File ".cache/huggingface/modules/transformers_modules/main/modeling_internlm2.py", line 656, in forwa…
```