-
### What is the issue?
Something seems to be wrong with InternLM2.5; I can't get any meaningful output from it (tried with 32k context).
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama vers…
-
### What happened?
I fine-tuned the **InternLM2 7b-chat** model in **LLamaFactory** using a custom dataset and **LoRA**, exported the safetensors model, and converted it to GGUF format using `convert…
-
Why is there no `--int8_kv_cache` option when I use convert_checkpoint.py to build an int8-KV-cache internlm2-chat-20b model?
convert_checkpoint.py is in /TensorRT-LLM/examples/internlm2/convert…
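For context on what the missing flag would do: an int8 KV cache stores the attention key/value tensors as int8 with a calibrated scale, halving KV-cache memory versus fp16. Below is a minimal per-tensor symmetric quantization sketch in plain Python. It only illustrates the idea; it is not TensorRT-LLM's implementation, which picks scales during a separate calibration pass.

```python
# Illustrative per-tensor symmetric int8 quantization of a KV-cache tensor.
# Sketch only -- not TensorRT-LLM code.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0  # map the max magnitude to 127
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

kv = [0.12, -0.5, 0.33, -0.07]          # toy key/value activations
q, scale = quantize_int8(kv)
recovered = dequantize_int8(q, scale)
# Rounding error per element stays below one quantization step (= scale).
print(max(abs(a - b) for a, b in zip(kv, recovered)) < scale)  # True
```

The trade-off is exactly this rounding error in exchange for half the KV-cache footprint, which is why the conversion script needs calibration data to choose good scales.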
-
### System Info
- GPU: A800 * 8 (NVLink)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task…
-
I planned to download internlm2-7b, but downloading this model fails. Downloading llama2 works fine, which is strange.
Command run:
python data/hf_dw.py --model internlm/internlm2-7b --use_hf_transfer False
Error output:
export HF_ENDPOINT= https://hf-mirror.com
/home/shaoyuantian/…
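One detail worth checking before anything else: if the `export` line above was typed literally, the space after `=` is a shell bug in itself. POSIX shells treat `HF_ENDPOINT= https://hf-mirror.com` as "run the command `https://hf-mirror.com` with `HF_ENDPOINT` temporarily set to the empty string", so the download script never sees the mirror. A minimal sketch of the fix:

```shell
# Wrong:  export HF_ENDPOINT= https://hf-mirror.com
#         -> assigns "" and tries to execute the URL as a command
# Right: no space around '='
export HF_ENDPOINT=https://hf-mirror.com
echo "$HF_ENDPOINT"
# then re-run: python data/hf_dw.py --model internlm/internlm2-7b --use_hf_transfer False
```

If llama2 was downloaded in an earlier session (or from a cached copy), that would explain why it worked while this model fails.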
-
InternVL/internvl_chat/shell/internlm2_20b_dynamic
/internvl_chat_v1_5_internlm2_20b_dynamic_res_finetune.sh
Is there a version that runs without srun? As a non-root user I cannot install the slurm-client packages that srun requires.
-
Hello, fine-tuning with the script `./shell/internlm2_1_8b_dynamic/internvl_chat_v1_5_internlm2_1_8b_dynamic_res_finetune.sh` currently hits **OOM**.
The hardware is **2 x 4090**.
Even after reducing **BATCH_SIZE** and **PER_DEVICE_BATCH_SIZE**, it still **OOM**s.
So, may I ask: for fine-tuning…
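Some back-of-envelope arithmetic suggests why shrinking the batch size alone may not help here. Under assumed (not confirmed) conditions of full fine-tuning of the ~1.8B-parameter LLM with bf16 weights/gradients, fp32 Adam moments plus an fp32 master copy, and no ZeRO/FSDP sharding, the optimizer state alone already exceeds a 4090's 24 GB before any activations are allocated:

```python
# Rough optimizer-state memory for unsharded full fine-tuning.
# Assumptions (illustrative): 1.8e9 trainable params, bf16 weights + grads,
# fp32 Adam first/second moments, fp32 master weights. Activations are extra.
params = 1.8e9
bytes_per_param = 2 + 2 + 4 + 4 + 4   # weights, grads, Adam m, Adam v, master
total_gb = params * bytes_per_param / 1e9
print(f"{total_gb:.1f} GB per GPU")   # ~28.8 GB, above a 4090's 24 GB
```

If that roughly matches the setup, the fix is not batch size but reducing per-GPU state: sharding optimizer state across the two GPUs (e.g. DeepSpeed ZeRO), freezing parts of the model, or LoRA-style tuning.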
-
Hi there, nice work on the internVL! We're really impressed by the new internvl-v1.5.
One thing we noticed is that the backing language model internlm/internlm2-chat-20b has a fast tokenizer (https…
-
As the title says: are there plans to open-source an InternLM-XComposer-2.5 model built on the internLM2.5-7B base?
-
I tried to modify the source code to support LoRA loading for the internlm2 model. Loading the LoRA works fine, but the inference results are not correct.
The specific modifications include:
**1. add supported_l…
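One frequent cause of "loads fine but inference is wrong" in hand-wired LoRA support is dropping the `alpha / r` scaling (or transposing the A/B factors) when the adapter is applied. A minimal sketch of what the merged weight should be, using hypothetical 2x2 shapes in plain Python rather than this project's actual code:

```python
# Merged LoRA weight: W' = W + (alpha / r) * (B @ A).
# Omitting the alpha/r factor loads without error but silently skews outputs.
r, alpha = 2, 4
scaling = alpha / r                    # 2.0

W = [[1.0, 0.0], [0.0, 1.0]]           # base weight (out_features x in_features)
A = [[0.1, 0.2], [0.3, 0.4]]           # lora_A: r x in_features
B = [[0.5, 0.0], [0.0, 0.5]]           # lora_B: out_features x r

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

delta = matmul(B, A)                   # out_features x in_features
W_merged = [[W[i][j] + scaling * delta[i][j] for j in range(2)]
            for i in range(2)]
print(W_merged)
```

Comparing a few merged-weight values against a reference merge (e.g. one done offline in the training framework) is a quick way to confirm whether the custom loader applies the delta correctly.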