-
# Import assumed from lagent, which provides the LMDeployServer wrapper
# and the INTERNLM2_META meta template.
from lagent.llms import INTERNLM2_META, LMDeployServer

llm = LMDeployServer(path='internlm/internlm2_5-7b-chat',
                     model_name='internlm2',
                     meta_template=INTERNLM2_META,
                     top_p=0.8,
                     …
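A hedged usage sketch for the snippet above, assuming lagent's BaseLLM interface where generate() accepts a plain prompt string (method names may differ across lagent versions):

# Hypothetical usage; verify the exact interface in your lagent version.
response = llm.generate('Hello, who are you?')
print(response)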
-
Please help to implement internlm-xcomposer2-vl-7b serving support in lightweight serving or some other frameworks.
-
For example, baichuan-7b-v1 is currently free for a limited time.
{
  "models": [
    "qwen-long",
    "qwen-turbo",
    "qwen-plus",
    "qwen-max",
    …
-
When fine-tuning internlm-xcomposer2d5-7b, the loss stays at 0 the whole time.
-
This is great work! I wonder why you used InternLM-7B instead of a LLaMA-based model? Did you use InternLM-7B or InternLM-chat-7B?
And for the training data, did you only use the instruction tuning d…
-
Running inference with the example code, the 4-bit model peaks at 38 GB of GPU memory for a single image while answering. Is this normal? The unquantized model version fails with an error outright.
from lmdeploy import TurbomindEngineConfig, pipeline
from lmdeploy.vl import load_image
engine_config = TurbomindEngineConfig(model_for…
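For context, a minimal runnable sketch of 4-bit inference with lmdeploy, assuming the truncated line sets model_format='awq' and that the 4-bit weights live at internlm/internlm-xcomposer2-vl-7b-4bit (both are assumptions); cache_max_entry_count caps the KV-cache share of GPU memory, which is the usual lever when memory usage looks too high:

from lmdeploy import TurbomindEngineConfig, pipeline
from lmdeploy.vl import load_image

# Lowering cache_max_entry_count (default 0.8) shrinks the fraction of free
# GPU memory pre-allocated to the KV cache, which often dominates the
# reported footprint rather than the model weights themselves.
engine_config = TurbomindEngineConfig(model_format='awq',
                                      cache_max_entry_count=0.4)
pipe = pipeline('internlm/internlm-xcomposer2-vl-7b-4bit',  # assumed 4-bit repo
                backend_config=engine_config)
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('Describe this image.', image))
print(response.text)

If most of the 38 GB is pre-allocated KV cache rather than weights, a lower cache_max_entry_count should bring the peak down.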
-
First, thank you for the incredible results of this work.
I wrote the following inference code, referring to the official documentation, to load the model after training, but it reported an error TypeError:…
-
Hi, thanks for the great work here! I have 2 questions:
1. When will you provide fine-tuning scripts for InternLM-XComposer2-4KHD-7B?
2. What is the GPU requirement for fine-tuning InternLM-XCompo…
-
How can I run multi-GPU inference with InternLM-XComposer2-4KHD-7B? A single A100 40G reports insufficient GPU memory.
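A hedged sketch of one common approach: let Hugging Face accelerate shard the weights across all visible GPUs via device_map='auto' (the generation API itself is defined by the model's remote code, so only the loading step is shown):

import torch
from transformers import AutoModel, AutoTokenizer

# device_map='auto' asks accelerate to split the layers across every
# visible GPU instead of loading the whole model onto one card.
model = AutoModel.from_pretrained(
    'internlm/internlm-xcomposer2-4khd-7b',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map='auto',
).eval()
tokenizer = AutoTokenizer.from_pretrained(
    'internlm/internlm-xcomposer2-4khd-7b', trust_remote_code=True)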