-
MODEL_PROVIDER: OAIClient # By default, an OpenAI-format-compatible client is used; it works with OpenAI as well as all kinds of local models that expose an OpenAI-compatible API
Hello, which OpenAI-format-compatible local models were covered during testing?
**_Agently_** [Guidebook](https://github.com/Maplemx/Agently/blob/main/docs/guideboo…
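For illustration, a minimal sketch of what "OpenAI-format-compatible" means in practice: any local server that exposes the OpenAI chat-completions API can be targeted the same way OAIClient targets OpenAI. The `base_url` and model name below are placeholders, not values confirmed by this issue:

```python
# Sketch: point a standard OpenAI client at a hypothetical local server
# (e.g. a vLLM or similar OpenAI-compatible endpoint).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local endpoint
    api_key="EMPTY",                      # local servers often ignore the key
)

resp = client.chat.completions.create(
    model="qwen-7b-chat",  # placeholder local model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```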
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…
-
### Your current environment
```text
PyTorch version: 2.1.2+cu118
CUDA used to build PyTorch: 11.8
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)
Libc …
-
Thank you for your excellent work on MultimodalOCR!
When I run the following command:
`GPUS=2 BATCH_SIZE=8 sh shell/minimonkey/minimonkey_finetune_full.sh`
I encounter the following issue:
```
+ GP…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
--flash_attn True
### Expected behavior
ValueError: InternLM2ForCausalLM does not support Flash Attent…
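Not the author's fix, but a sketch of the usual workaround when a remote-code model class does not declare Flash Attention 2 support: fall back to the default attention via the `attn_implementation` argument (available in transformers >= 4.36). In LLaMA-Factory terms this corresponds to not enabling `--flash_attn`. The checkpoint path is a placeholder:

```python
# Sketch: load InternLM2 without Flash Attention 2 to avoid the
# ValueError above; "eager" is the default attention implementation.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-7b",  # placeholder checkpoint
    trust_remote_code=True,        # InternLM2 ships custom modeling code
    attn_implementation="eager",   # bypasses the FA2 support check
)
```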
-
### Motivation
Are there plans to support the [xtuner-llava](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava) series of VLMs, such as LLaVA-InternLM2-7B (XTuner)?
### Related resources
_No response_
### Additional cont…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
+ accelerate launch --config_file /home/ma-user/work/wangshuai/LLaMA-Factory/acc_config.yaml src/train_b…
-
### Describe the question.
InternLM2 is an enhanced model built on InternLM2-Base, and its capabilities should be better in many domains. Why isn't the subsequent SFT model based on it?
-
Differences from the official installation:
1. Because the server cannot reach git, flash_attn was installed from the prebuilt wheel flash_attn-2.3.6+cu118torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl.
2. Because transformers-4.36.2 raised an error that "InternLM2Tokenizer" could not be found, transformers-4.37.0 was installed instead.
代…
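As a quick sanity check, a sketch that verifies the two substitutions above took effect (the version numbers are the ones named in this report; the checkpoint path is a placeholder):

```python
# Sketch: confirm the prebuilt flash_attn wheel and the transformers
# upgrade that provides the InternLM2 remote-code tokenizer.
from packaging import version
import flash_attn
import transformers

assert flash_attn.__version__.startswith("2.3.6"), flash_attn.__version__
assert version.parse(transformers.__version__) >= version.parse("4.37.0")

# InternLM2Tokenizer lives in the model repo's custom code, so loading
# it requires trust_remote_code=True.
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained(
    "internlm/internlm2-chat-20b",  # placeholder checkpoint
    trust_remote_code=True,
)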
-
I am currently doing full-parameter fine-tuning of internlm2-chat-20b with ZeRO-3. On 8×A100 I can only fine-tune with a 2k context. How should I configure things to fine-tune with a 200k context, or at least a fairly long one of several tens of k?
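Not an authoritative answer, but a sketch of the memory knobs usually combined for longer contexts under ZeRO-3: activation (gradient) checkpointing plus parameter/optimizer offload to CPU. All values below are illustrative, not a recipe verified for internlm2-chat-20b at 200k; context lengths that large typically also require sequence parallelism, which this sketch does not cover:

```python
# Sketch of a memory-oriented DeepSpeed ZeRO-3 config for long-context
# full fine-tuning; trades throughput for memory headroom.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},  # move optimizer state off GPU
        "offload_param": {"device": "cpu"},      # move parameters off GPU
    },
    "bf16": {"enabled": True},
    "activation_checkpointing": {"partition_activations": True},
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,
}

# On the model side, activation checkpointing is what most directly cuts
# the memory that grows with context length:
# model.gradient_checkpointing_enable()
```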