-
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
### Describe the bug
I installed lmdeploy using pip…
-
```shell
(xtuner) ➜ xtuner git:(main) python xtuner/tools/train.py xtuner/configs/internlm/internlm_chat_7b/internlm_chat_7b_qlora_arxiv_gentitle_e3.py
08/31 00:40:13 - mmengine - INFO -
--------…
```
-
Training Qwen-7B on custom single-turn data in Colab (T4, 15 GB memory) fails with a CUDA OOM error.
1. In the examples, InternLM-7B fine-tunes without problems; why does Qwen, also a 7B model, fail? Is there any way to reduce the GPU memory requirement?
The model config is as follows:
```python
model = dict(
    type=SupervisedFinetune,
    llm=dict(
        type=Auto…
```
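One lever that typically cuts the footprint is loading the base model's weights in 4-bit (QLoRA-style) instead of fp16. As a rough back-of-the-envelope check (plain arithmetic, not a measurement of this particular run), the weights alone for a 7B model shrink from roughly 13 GiB to about 3.3 GiB:

```python
# Back-of-the-envelope estimate of GPU memory held by model weights alone.
# Activations, optimizer state, and the CUDA context come on top of this,
# which is why fp16 weights alone already exhaust a 15 GB T4.
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Weight memory in GiB for a model with n_params parameters."""
    return n_params * bytes_per_param / 2**30

N = 7e9                    # a 7B-parameter model such as Qwen-7B
fp16 = weight_gib(N, 2.0)  # fp16: 2 bytes per parameter  -> ~13 GiB
int4 = weight_gib(N, 0.5)  # 4-bit quantized: 0.5 bytes   -> ~3.3 GiB
print(f"fp16: {fp16:.1f} GiB, 4-bit: {int4:.1f} GiB")
```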
-
### Motivation
I saw that the turbomind framework now supports Qwen-7B-chat. How do I convert this model, and what is the exact command? On my side I ran:
```shell
python3 -m lmdeploy.serve.turbomind.deploy qwen-7b ../pretrain_models/Qwen-7B-chat/ qwen --tp 1 -d ./workspace-qwen-7b-chat-f…
```
-
Inside a Docker container, I installed xtuner as follows:
```shell
git clone https://github.com/InternLM/xtuner.git
cd xtuner
pip install -e '.[all]'
```
All the other dependencies are already configured. In the xtuner installation directory, running `xtuner list-cfg` reports an error:
```
bash: xtuner: command not foun…
```
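`command not found` usually means the directory where pip placed the `xtuner` console script is not on `PATH` (for a `--user` install this is often `~/.local/bin`). A minimal sketch of the mechanism, using a stub script rather than the real `xtuner` entry point:

```shell
# Simulate a console script that is installed but whose directory is not on PATH.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho xtuner-ok\n' > "$bindir/xtuner"
chmod +x "$bindir/xtuner"

# The shell cannot resolve it yet: this is the "command not found" situation.
command -v xtuner >/dev/null || echo "not found before PATH update"

# Prepend the script's directory to PATH, as you would with pip's bin dir.
PATH="$bindir:$PATH"
xtuner   # now resolves and runs
```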
-
Hi, I tested LMDeploy with the following steps,
- 1. Get models from https://huggingface.co/internlm/internlm-chat-7b/
- 2. Convert to triton models `python -m lmdeploy.serve.turbomind.deploy interl…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
![image](https://github.com/I…
-
**功能描述 / Feature Description**
The project documentation is not very detailed; I'd be grateful if anyone passing by could help me sort out the overall flow.
**解决的问题 / Problem Solved**
The three programs that need to run are: `python server/llm_api.py`, `python server/api.py`, and `streamlit run webui.py`.
But these…
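Since the three programs are independent long-running services, the usual pattern is to start each one as a separate process. A minimal sketch of that pattern with Python's `subprocess` module; the two commands below are stand-ins, not the project's real `server/llm_api.py` / `server/api.py` / `webui.py` entry points:

```python
import subprocess
import sys

# Stand-in commands; in practice these would be the real service entry points,
# e.g. [sys.executable, "server/llm_api.py"] or ["streamlit", "run", "webui.py"].
commands = [
    [sys.executable, "-c", "print('llm_api up')"],
    [sys.executable, "-c", "print('api up')"],
]

# Launch all services concurrently, then collect each one's output.
procs = [subprocess.Popen(c, stdout=subprocess.PIPE, text=True) for c in commands]
outputs = [p.communicate()[0].strip() for p in procs]
print(outputs)
```

In a real deployment the processes would stay alive, so you would keep the `Popen` handles and monitor them instead of calling `communicate()` immediately.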
-
### Describe the bug
The result I get is listed below.
```
dataset                      version    metric    mode    internlm-chat-7b-hf
---------------------------  ---------  -------------  …
```