-
### Motivation
Please add support for internlm2_5-20b-chat.
### Related resources
_No response_
### Additional context
_No response_
-
- May I ask, were the ophthalmology textbooks fed in as data for incremental (continued) pre-training?
- Could you share how the training data was prepared (one hypothetical pipeline is sketched after this list)? Thanks~ (If it was done manually, the workload must have been huge.)
Thank you~
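For reference, below is a minimal, hypothetical sketch of one way domain books are often turned into continued pre-training data: clean the extracted text, split it into fixed-size chunks, and write JSONL records with a single `text` field. The directory name, chunk size, and field name are assumptions, not this project's actual pipeline.

```python
# Hypothetical sketch: convert raw domain text (e.g. text extracted from books)
# into JSONL chunks suitable for continued pre-training.
# File names and the chunk size are illustrative assumptions.
import json
from pathlib import Path

CHUNK_CHARS = 2048  # rough chunk length; tune to the tokenizer/context window


def iter_chunks(text: str, size: int = CHUNK_CHARS):
    """Yield fixed-size character chunks from cleaned text."""
    text = " ".join(text.split())  # collapse whitespace from OCR/PDF extraction
    for i in range(0, len(text), size):
        yield text[i:i + size]


def build_corpus(src_dir: str, out_path: str) -> None:
    with open(out_path, "w", encoding="utf-8") as out:
        for txt_file in Path(src_dir).glob("*.txt"):
            raw = txt_file.read_text(encoding="utf-8", errors="ignore")
            for chunk in iter_chunks(raw):
                out.write(json.dumps({"text": chunk}, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    build_corpus("ophthalmology_books_txt", "pretrain_corpus.jsonl")
```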
-
### What happened?
I fine-tuned the **InternLM2 7B-chat** model in **LLaMA-Factory** using a custom dataset and **LoRA**, exported the safetensors model and converted it to GGUF format using `convert…
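For context, a common workflow for this kind of export (a sketch under assumptions, not necessarily the exact steps used here) is to merge the LoRA adapter into the base weights first and only then run llama.cpp's conversion script on the merged directory; all paths below are placeholders.

```python
# Hypothetical sketch: merge a LoRA adapter into the base model so the result
# can be converted to GGUF with llama.cpp's convert script. Paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "internlm/internlm2-chat-7b"
ADAPTER = "path/to/lora_adapter"   # LLaMA-Factory LoRA output directory (assumption)
MERGED = "path/to/merged_model"    # directory to convert to GGUF afterwards

base = AutoModelForCausalLM.from_pretrained(BASE, trust_remote_code=True, torch_dtype="auto")
model = PeftModel.from_pretrained(base, ADAPTER)
model = model.merge_and_unload()   # fold the LoRA weights into the base weights

model.save_pretrained(MERGED, safe_serialization=True)  # writes safetensors
AutoTokenizer.from_pretrained(BASE, trust_remote_code=True).save_pretrained(MERGED)
# Afterwards, run llama.cpp's conversion script on MERGED to produce a GGUF file.
```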
-
Hi, did you first train the projector and then train the projector + LLM together? Could you share the details of those two stages?
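For illustration, stage-wise training like this is usually implemented by toggling `requires_grad` on the relevant parameter groups. The submodule names below (`vision_tower`, `projector`, `llm`) are assumptions for the sketch, not the names used in any particular codebase.

```python
# Hypothetical sketch of two-stage training: stage 1 trains only the projector,
# stage 2 trains projector + LLM. Module/attribute names are illustrative.
import torch.nn as nn


def set_trainable(module: nn.Module, trainable: bool) -> None:
    for p in module.parameters():
        p.requires_grad = trainable


def configure_stage(model: nn.Module, stage: int) -> None:
    # `model` is assumed to expose .vision_tower, .projector and .llm submodules.
    set_trainable(model.vision_tower, False)   # vision encoder stays frozen
    set_trainable(model.projector, True)       # projector trains in both stages
    set_trainable(model.llm, stage >= 2)       # LLM only joins in stage 2
```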
-
### The model to consider.
Thanks to the efforts of the vLLM team. I am currently preparing to optimize the inference performance of WeMM; the link is provided below.
https://huggingface.co/f…
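As a reference point, a minimal vLLM offline-inference sketch looks like the following; whether WeMM loads this way depends on vLLM supporting its architecture, and the model path is a placeholder rather than the repository linked above.

```python
# Minimal vLLM offline-inference sketch. The model path is a placeholder and
# support for this particular architecture is an assumption, not verified.
from vllm import LLM, SamplingParams

MODEL = "path/or/repo-id-of-WeMM"  # placeholder; use the repository linked above

llm = LLM(model=MODEL, trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Describe the image preprocessing pipeline."], params)
for out in outputs:
    print(out.outputs[0].text)
```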
-
### Motivation
InternVL2-1xB models are missing. Are InternVL2-1xB models in the plan, e.g., combining [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) and […
-
Error log:
```
Generating train split: 3457 examples [00:00, 14292.20 examples/s]
Map (num_proc=32):   0%|          | 0/3457 [00:00
```
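For context, the log is cut off during a `datasets.map` call running with `num_proc=32`. One common way to narrow such issues down (an assumption about debugging strategy, not a confirmed fix) is to re-run the same mapping single-process so the underlying exception surfaces instead of being swallowed by worker processes; the tokenizer, data file, and column names below are placeholders.

```python
# Hypothetical debugging sketch: reproduce the tokenization map step with a
# single process so the real exception is surfaced. Names and paths are illustrative.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b", trust_remote_code=True)
ds = load_dataset("json", data_files="train.json", split="train")  # placeholder path


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)


# Start single-process; only raise num_proc once this succeeds.
ds = ds.map(tokenize, batched=True, num_proc=None, remove_columns=ds.column_names)
print(ds)
```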
-
### System Info
- GPU: A800*8
- NVLink
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task…
-
After running `web_internlm2_5.py` I get the error `ModuleNotFoundError: No module named 'transformers_modules.EmoLLM_V3'`. Thanks in advance for any help.
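One commonly reported cause of this kind of `transformers_modules` import error is a stale dynamic-module cache left over from an earlier checkpoint name. The sketch below (the checkpoint path is a placeholder, and the cause here is an assumption) clears that cache and reloads the checkpoint with `trust_remote_code=True`.

```python
# Hypothetical troubleshooting sketch for the transformers_modules import error:
# clear the cached dynamic modules, then reload the checkpoint with
# trust_remote_code=True. The checkpoint path is a placeholder.
import shutil
from pathlib import Path

from transformers import AutoModelForCausalLM, AutoTokenizer

# Stale copies of remote code live here and can shadow the current checkpoint.
cache = Path.home() / ".cache" / "huggingface" / "modules" / "transformers_modules"
shutil.rmtree(cache, ignore_errors=True)

ckpt = "./EmoLLM_V3"  # placeholder local path to the model directory
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(ckpt, trust_remote_code=True)
```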
-
```
==========
== CUDA ==
==========
CUDA Version 12.4.1
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents a…
```