-
The code I ran is the demo command from the tutorial:
`python run.py --datasets ceval_gen --hf-path /share/temp/model_repos/internlm-chat-7b/ --tokenizer-path /share/temp/model_repos/internlm-chat-7b/ --tokenizer-kwargs padding_side='le…
-
Thank you very much for your outstanding work! When trying to use the InternLM model, I found that the features obtained from vLLM's first forward pass are different from those obtained by HF for t…
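For anyone debugging this, a minimal sketch of one way to compare the first-step outputs of the two stacks (the model path, prompt, and top-k value are placeholders, and the two halves may need to run separately if GPU memory is tight):

```python
# Hedged sketch: compare the first-step token logprobs from vLLM against the
# logits HF produces for the same prompt. Checkpoint and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

MODEL = "internlm/internlm-chat-7b"  # placeholder checkpoint
prompt = "Hello, world"

# HF reference: the logits at the last prompt position drive the first new token.
tok = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
hf_model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()
with torch.no_grad():
    ids = tok(prompt, return_tensors="pt").input_ids.cuda()
    hf_logprobs = hf_model(ids).logits[0, -1].float().log_softmax(-1)

# vLLM: request the top logprobs of the first generated token.
llm = LLM(model=MODEL, trust_remote_code=True, dtype="float16")
out = llm.generate([prompt], SamplingParams(max_tokens=1, logprobs=5))[0]
vllm_logprobs = out.outputs[0].logprobs[0]  # dict keyed by token id

for token_id, lp in vllm_logprobs.items():
    v = getattr(lp, "logprob", lp)  # Logprob object in newer vLLM, float in older
    print(token_id, float(v), float(hf_logprobs[token_id]))
```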
-
**Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
Please support the `internlm/internlm-xcomposer2-vl-7b-4bit` model. They have already provi…
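For context, the checkpoint is a GPTQ 4-bit export. A minimal loading sketch follows; this is an assumption based on the common `trust_remote_code` flow in transformers, not the model card's confirmed recipe (which relies on auto-gptq):

```python
# Hedged sketch: one generic way to load a 4-bit multimodal checkpoint.
# The exact recipe for internlm-xcomposer2-vl-7b-4bit lives on its model
# card and uses auto-gptq; this path is an assumption, not the confirmed
# API of the framework the request is aimed at.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "internlm/internlm-xcomposer2-vl-7b-4bit"
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="cuda:0",
).eval()
```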
-
Thank you for the excellent work! [Inference on Multiple GPUs](https://github.com/InternLM/InternLM-XComposer?tab=readme-ov-file#inference-on-multiple-gpus) in the README calls [example_chat.py](https://g…
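For reference, a minimal sketch of the usual transformers/accelerate multi-GPU path; the checkpoint name is an assumption and this is not a verbatim excerpt of example_chat.py:

```python
# Hedged sketch: shard the model across all visible GPUs with
# device_map="auto". This mirrors what multi-GPU inference scripts
# typically do; example_chat.py's internals may differ.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "internlm/internlm-xcomposer2-vl-7b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto",  # accelerate splits layers across available GPUs
).eval()
print(model.hf_device_map)  # inspect which layers landed on which GPU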
-
As the title says: are there plans to open-source an InternLM-XComposer-2.5 model built on the InternLM2.5-7B base?
-
I see that the supported base models include InternLM2.
InternLM2.5-7B has recently been released; is it supported?
-
```
Traceback (most recent call last):
  File "/opt/tiger/internlm-xcomposer/finetune/finetune.py", line 311, in <module>
    train()
  File "/opt/tiger/internlm-xcomposer/finetune/finetune.py", line 242, in t…
```
-
If I want to fine-tune the older "internlm-xcomposer-7b" on two 3090s, how should I modify the code? I found that the latest multi-GPU code does not work with the older model.
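Not an official answer, but one commonly used workaround is to shard the frozen base model across both cards and train only LoRA adapters. A hedged sketch follows; the checkpoint name, memory caps, and `target_modules` list are assumptions, not values verified against the old internlm-xcomposer-7b code:

```python
# Hedged sketch: fit a 7B model onto two 24 GB 3090s by splitting the
# frozen base across both GPUs with device_map, then training only LoRA
# adapters. Module names and memory caps are assumptions.
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

model = AutoModel.from_pretrained(
    "internlm/internlm-xcomposer-7b",     # assumed checkpoint name
    torch_dtype=torch.float16,
    trust_remote_code=True,               # the registered model class may differ
    device_map="auto",                    # split layers across both 3090s
    max_memory={0: "20GiB", 1: "20GiB"},  # leave headroom for activations
)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["wqkv"],  # assumed InternLM attention module
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```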
-
The environment was configured exactly as in the tutorial.
My custom dataset is:
{"conversation": [{"system": "You are an expert at keyword extraction for travel-route planning: you can pick out the keywords in a sentence precisely, ensuring that accurate and high-quality content can be found online to use in the reply", "input": "I want to plan an in-depth trip to Australia; which attractions are worth recommending?", "output": "Australia, in-depth travel, attraction recommendations"}…
-
- [x] MiniCPM-Llama3-V-2_5
- [x] Florence 2
- [x] Phi-3-vision
- [x] Bunny
- [x] Dolphin-vision-72b
- [x] Llava Next
- [x] Qwen2-VL
- [x] Pixtral
- [x] Llama-3.2
- [x] Llava Interleave
- [ ] …