-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-compass/opencompass/issues/) and [Discussions](https://github.com/open-compass/opencompass/discussions) but cannot get the expe…
-
### Describe the bug
The README specifies that we can run inference on `internlm/internlm2_5-7b-chat-4bit` with the following code:
```python
from lmdeploy import pipeline
pipe = pipeline("int…
-
I encountered the following error when merging the llm-7b adapter, although I had previously merged llm-20b successfully.
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
When I give it a text of 10,000–200,000 characters and ask it to summarize the content, as soon as the input exceeds 10,000 characters…
-
```
Traceback (most recent call last):
  File "InternLM-XComposer-main/test.py", line 52, in
    response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
  File "…
```
-
### Describe the bug
[ModelCloud/internlm-2.5-7b-chat-gptq-4bit](https://huggingface.co/ModelCloud/internlm-2.5-7b-chat-gptq-4bit) and my code:
```python
from vllm import LLM, SamplingParams
# Sample prompts.
p…
-
```python
import os
import torch
from transformers import AutoModel, AutoTokenizer
modelpath = os.getenv('MODELS')
model_name_or_path = modelpath + "/internlm-xcomposer2-vl-7b"
# os.environ["CUD…
-
Many service providers now offer OpenAI-compatible APIs for large models, giving free access to various open-source LLMs, including Qwen2, GLM4, and InternLM2.5.
How should the models config file be modified to connect to and use these open-source LLM services?
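As a rough sketch (the endpoint URL, model name, and API key below are placeholders, not taken from the original question), OpenCompass's built-in `OpenAI` model type can usually be pointed at any OpenAI-compatible endpoint through a models config along these lines:

```python
# Hypothetical OpenCompass models config for an OpenAI-compatible endpoint.
# All values below are placeholders; substitute your provider's actual
# endpoint, model name, and API key.
from opencompass.models import OpenAI

models = [
    dict(
        abbr='internlm2.5-api',                 # label used in result tables
        type=OpenAI,
        path='internlm2.5-latest',              # model name as exposed by the provider
        openai_api_base='https://api.example.com/v1/chat/completions',
        key='YOUR_API_KEY',                     # provider-issued credential
        max_out_len=2048,
        max_seq_len=4096,
        batch_size=8,
    ),
]
```

The `openai_api_base` and `key` values must match the specific provider's endpoint and credentials; check the provider's documentation for the exact model names it serves.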
-
Many thanks to your team for integrating lmdeploy into swift!
However, I ran into some problems while using it.
I run inference with the following script:
```
CUDA_VISIBLE_DEVICES=4,5,6,7 swift infer \
--model_type internlm-xcomposer2-4khd-7b-chat \
--model_id_or_path /data…
-
### Describe the bug
More information about the error:
```
Traceback (most recent call last):
File "****/7b_sft.py", line 3, in
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-cha…