-
Command: xtuner train internlm2_chat_7b_qlora_alpaca_e3
Error: mmengine - WARNING - WARNING: command error: 'dlopen: cannot load any more object with static TLS'!
xtuner version
- mmengine - INFO - 0.1.14
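The "cannot load any more object with static TLS" failure is typically shared-library load-order dependent. One common workaround (an assumption here, not confirmed by this log) is to preload the offending `.so` via `LD_PRELOAD` when launching the training command. A minimal sketch of building such an environment; the library name is a placeholder, not taken from the report:

```python
import os
import subprocess

def with_preload(lib):
    """Return a copy of the environment with `lib` prepended to LD_PRELOAD.

    Which .so actually triggers the static-TLS limit must be read from the
    dlopen error in your own logs; `lib` here is only a placeholder.
    """
    env = dict(os.environ)
    prev = env.get("LD_PRELOAD")
    env["LD_PRELOAD"] = f"{lib}:{prev}" if prev else lib
    return env

# Example launch (commented out, not executed in this sketch):
env = with_preload("libgomp.so.1")  # library name is an assumption
# subprocess.run(["xtuner", "train", "internlm2_chat_7b_qlora_alpaca_e3"], env=env)
```

If the preload approach does not help, importing the heavyweight native packages (e.g. torch) first, before anything else, sometimes changes the load order enough to avoid the limit.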
-
### Describe the question.
1. How well does InternLM2 recognize and generate Traditional Chinese?
2. If fine-tuning with XTuner, how should I enlarge the tokenizer vocabulary to support Traditional Chinese?
3. If not using XTuner, what tool should I use to fine-tune and enlarge the tokenizer vocabulary to support Traditional Chinese?
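On question 2/3: with HuggingFace transformers the usual recipe is `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))`, then continued fine-tuning so the new embeddings are trained. The toy pure-Python sketch below (no transformers dependency; all names hypothetical) only illustrates the bookkeeping: the embedding table must grow in lockstep with the vocabulary.

```python
import random

def extend_vocab(vocab, embeddings, new_tokens, dim=4):
    """vocab: token -> id; embeddings: list of dim-sized vectors, index = id."""
    added = 0
    for tok in new_tokens:
        if tok not in vocab:            # skip tokens already covered
            vocab[tok] = len(vocab)     # assign the next free id
            # new rows start near zero, like freshly resized embeddings
            embeddings.append([random.gauss(0, 0.02) for _ in range(dim)])
            added += 1
    return added

vocab = {"<unk>": 0, "你": 1, "好": 2}
emb = [[0.0] * 4 for _ in vocab]
n = extend_vocab(vocab, emb, ["臺", "灣", "好"])  # "好" is already present
```

Here `n` is 2 and both `vocab` and `emb` end up with 5 entries; in the real API the resize must happen before training resumes, or the new ids index past the embedding matrix.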
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
I promise that I will not rai…
-
ValueError: Model architectures ['InternLM2ForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomFo…
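This error usually means the serving library's version predates InternLM2 support, so the fix is normally upgrading the library rather than editing `config.json`. A small sketch of checking a model's declared architecture against a supported list up front (the list below is abridged from the truncated error message; the JSON is an inline stand-in for a real `config.json`):

```python
import json

# Abridged from the error message above; the full list is truncated in the report.
SUPPORTED = {"AquilaModel", "AquilaForCausalLM",
             "BaiChuanForCausalLM", "BaichuanForCausalLM"}

def is_supported(config_text, supported=SUPPORTED):
    """Return True if any architecture in config.json is in the supported set."""
    archs = json.loads(config_text).get("architectures", [])
    return any(a in supported for a in archs)

cfg = '{"architectures": ["InternLM2ForCausalLM"]}'
ok = is_supported(cfg)  # False with this abridged list
```

When `is_supported` returns False, upgrade to a release whose supported list includes `InternLM2ForCausalLM` instead of renaming the architecture in the config.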
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
I launched Internvl2-llama3-76… on 8 V100 cards
-
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
### Describe the bug
In my A40/T4 card environment…
-
```shell
ubuntu22.04
# Installation steps
conda create --name xtuner-env python=3.10 -y
conda activate xtuner-env
pip install -U 'xtuner[deepspeed]' -i https://pypi.tuna.tsinghua.edu.cn/simple/
# Run
xtuner t…
-
Model training loss:
Training script: NPROC_PER_NODE=8 xtuner train llava_internlm2_chat_7b_qlora_clip_vit_large_p14_336_lora_e1_gpu8_finetune --deepspeed deepspeed_zero2
Model convert issue:
Model conversion OOM:
NPROC_PER_NODE…
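For the conversion OOM, one hedged workaround is to run the pth-to-HF conversion on CPU by hiding the GPUs, trading speed for the larger host RAM. `xtuner convert pth_to_hf CONFIG PTH SAVE_PATH` is the documented converter subcommand; the paths below are placeholders, and forcing CPU is an assumption, not a confirmed fix:

```python
import os
import subprocess

def build_convert_cmd(config, pth, save_path):
    # `xtuner convert pth_to_hf` merges the trained weights into a
    # HuggingFace-format checkpoint.
    return ["xtuner", "convert", "pth_to_hf", config, pth, save_path]

def cpu_only_env():
    # An empty CUDA_VISIBLE_DEVICES hides all GPUs, so the conversion
    # allocates tensors in host RAM instead of GPU memory.
    return dict(os.environ, CUDA_VISIBLE_DEVICES="")

cmd = build_convert_cmd("CONFIG.py", "iter_XXXX.pth", "./hf_model")  # placeholders
# subprocess.run(cmd, env=cpu_only_env())  # not executed in this sketch
```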
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
`
(my-env) C:\Users\a9092\Do…
-
xinference 0.8.3.1, GPU: T4 16GB;
internlm2-7b, running in 4-bit
Q: Introduce yourself
Model answer:
I am an AI assistant named Intern LM, designed and created by AI medical engineers. I am one of the products in a series of high-performance machine-learning expert models (PM3), version 4; I can answer general-knowledge questions for you and offer interesting stories and tidbits to help you learn about the world and the details of your own living environment and…