-
Hello unsloth team,
I'm trying to use the InternLM2.5 model (specifically internlm/internlm2_5-7b-chat) with unsloth, but I'm encountering a NotImplementedError. Could you please add support for th…
-
### What is the issue?
Something seems to be wrong with InternLM2.5; I can't get any meaningful output from it (tried with 32k context).
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama vers…
-
Why is there no --int8_kv_cache option when I use convert_checkpoint.py to build an int8_kv_cache internlm2-chat-20b model?
convert_checkpoint.py is in /TensorRT-LLM/examples/internlm2/convert…
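For comparison, other TensorRT-LLM example converters expose this as a plain argparse flag. A minimal sketch of what adding the option to the internlm2 converter could look like — the flag name mirrors the llama example, and the help text is an assumption, not the actual TensorRT-LLM wording:

```python
import argparse

# Hypothetical sketch: the argparse flag one could add to internlm2's
# convert_checkpoint.py, mirroring the --int8_kv_cache option that other
# TensorRT-LLM example converters (e.g. llama) already expose.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--int8_kv_cache",
    action="store_true",
    default=False,
    help="Calibrate and store the KV cache in INT8 (assumed wording).",
)

args = parser.parse_args(["--int8_kv_cache"])
print(args.int8_kv_cache)  # True when the flag is passed
```

This only illustrates the command-line surface; the actual INT8 KV-cache calibration logic would still need to be wired into the internlm2 conversion path.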
-
I planned to download internlm2-7b, but downloading this model fails. Downloading llama2 works fine, which is strange.
Command run:
python data/hf_dw.py --model internlm/internlm2-7b --use_hf_transfer False
Error output:
export HF_ENDPOINT= https://hf-mirror.com
/home/shaoyuantian/…
-
### System Info
- GPU: A800 × 8
- NVLink
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task…
-
InternVL/internvl_chat/shell/internlm2_20b_dynamic
/internvl_chat_v1_5_internlm2_20b_dynamic_res_finetune.sh
Is there a version that can run without srun? As a non-root user, I can't install the slurm-client packages that srun requires.
-
Hi there, nice work on InternVL! We're really impressed by the new InternVL-v1.5.
One thing we noticed is that the backing language model internlm/internlm2-chat-20b has a fast tokenizer (https…
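For context, Transformers exposes `is_fast` to distinguish Rust-backed "fast" tokenizers from the slow Python ones. A self-contained illustration of the check — using a toy WordLevel vocabulary rather than the actual internlm2 tokenizer, so the vocab and ids here are made up:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast

# Build a tiny Rust-backed tokenizer in memory (toy vocab, for illustration only).
vocab = {"hello": 0, "world": 1, "[UNK]": 2}
backend = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
backend.pre_tokenizer = Whitespace()

# Wrapping it in PreTrainedTokenizerFast gives the same interface that
# AutoTokenizer returns when a checkpoint ships a tokenizer.json.
tok = PreTrainedTokenizerFast(tokenizer_object=backend, unk_token="[UNK]")

print(tok.is_fast)                      # True: backed by the Rust tokenizers library
print(tok("hello world")["input_ids"])  # [0, 1]
```

With a real checkpoint the equivalent check is simply `AutoTokenizer.from_pretrained(...).is_fast`; a slow, pure-Python tokenizer reports `False` there.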
-
### Describe the bug
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '6'
tokenizer = AutoTokenizer.from_pretrained("/nvme/models…
```
-
### Describe the feature
Hi InternLM Team,
Thanks for your great work and the powerful InternLM2.5 models. I'm currently conducting research on efficient long-context LLM inference, [MInference](…
-
This is the fine-tuning config script I modified:
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (Checkpoi…