-
Promising model; have a look whenever you are free.
internlm/internlm2_5-7b-chat
-
For example, baichuan-7b-v1 is currently free for a limited time.
{
"models": [
"qwen-long",
"qwen-turbo",
"qwen-plus",
"qwen-max",
…
-
First of all, thank you for the incredible results of this work.
I wrote the following inference code, referring to the official documentation, to load the model after training, but it reported a TypeError:…
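The reporter's snippet and traceback are truncated above. For comparison, here is a minimal loading sketch in the style documented on the internlm2 model cards, assuming a transformers-compatible checkpoint; the path and prompt are placeholders, not values from the report:

```python
# Hedged sketch, not the reporter's code: load an internlm2 chat
# checkpoint via transformers remote code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "internlm/internlm2_5-7b-chat"  # swap in the trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda().eval()

# internlm2 chat checkpoints expose a .chat() helper through remote code.
response, _history = model.chat(tokenizer, "Hello", history=[])
print(response)
```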
-
python others/test_diff_vlm/InternLM_XComposer.py
Set max length to 16384
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████…
-
llm = LMDeployServer(path='internlm/internlm2_5-7b-chat',
model_name='internlm2',
meta_template=INTERNLM2_META,
top_p=0.8,
…
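The call above is truncated. For reference, the MindSearch examples fill in the remaining arguments roughly as below; these trailing values are taken from that repo's sample code and are an assumption, not a reconstruction of the elided lines:

```python
from lagent.llms import INTERNLM2_META, LMDeployServer

# Sketch of a complete call; arguments after top_p follow the
# MindSearch examples and may differ from the truncated original.
llm = LMDeployServer(path='internlm/internlm2_5-7b-chat',
                     model_name='internlm2',
                     meta_template=INTERNLM2_META,
                     top_p=0.8,
                     top_k=1,
                     temperature=0,
                     max_new_tokens=8192,
                     repetition_penalty=1.02,
                     stop_words=['<|im_end|>'])
```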
-
Thank you for the excellent work! [Inference on Multiple GPUs](https://github.com/InternLM/InternLM-XComposer?tab=readme-ov-file#inference-on-multiple-gpus) in the README calls [example_chat.py](https://g…
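The linked README section and script are truncated here. A common way to spread such a checkpoint over several GPUs, not necessarily what example_chat.py itself does, is accelerate-style device_map sharding; the model id below is illustrative:

```python
# Hedged sketch: shard a checkpoint across all visible GPUs with
# device_map="auto" (requires the `accelerate` package).
import torch
from transformers import AutoModel, AutoTokenizer

path = "internlm/internlm-xcomposer2-vl-7b"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # layers are placed on GPUs automatically
    trust_remote_code=True,
).eval()
```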
-
The code being run is the demo code from the tutorial:
`python run.py --datasets ceval_gen --hf-path /share/temp/model_repos/internlm-chat-7b/ --tokenizer-path /share/temp/model_repos/internlm-chat-7b/ --tokenizer-kwargs padding_side='le…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
### Model introduction
This is a general-purpose 7B LLM. It has a 1M-token context window and is the best-performing sub-12B model on the Open LLM Leaderboard, so it might be a good base for further tra…
-
The MindSearch and lagent repositories were freshly cloned from git (cloned rather than pip-installed because the lagent code needs to be modified); everything else was installed with pip as usual.
I changed the model at line 15 of terminal.py to the local model internlm2-chat-20b-4bit (internlm2-chat-20b quantized with lmdeploy).
When running mindsearch/terminal.py, the following error appears, although it does not affect the final result:…
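For context, loading a 4-bit lmdeploy-quantized checkpoint directly looks roughly like the sketch below; the local path is a placeholder, and `model_format='awq'` tells the TurboMind backend that the weights are AWQ-quantized:

```python
from lmdeploy import TurbomindEngineConfig, pipeline

# Hedged sketch: run a quantized checkpoint through lmdeploy's
# pipeline API; the directory name is illustrative.
pipe = pipeline('/path/to/internlm2-chat-20b-4bit',
                backend_config=TurbomindEngineConfig(model_format='awq'))
print(pipe(['Hello, who are you?']))
```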