-
### Describe the bug
I am running quantized internlm2-chat-20b with llama.cpp, using the prompt template described [here](https://github.com/InternLM/InternLM/blob/main/chat/chat_format_zh-CN.md). Chatting is go…
gaord updated 7 months ago
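The template linked above is a ChatML-style format using `<|im_start|>`/`<|im_end|>` markers. As a minimal sketch, assuming the token names from that doc (verify them against your tokenizer config before use), a prompt could be assembled by hand like this:

```python
def build_internlm2_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt string for internlm2-chat models.

    Token names (<|im_start|>, <|im_end|>) follow the linked chat-format
    doc; check your tokenizer's special tokens if generation looks off.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_internlm2_prompt("You are a helpful assistant.", "Hello"))
```

A mismatched or hand-mangled template is a common cause of degraded chat quality with llama.cpp, so comparing the string you actually send against this layout is a cheap first check.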
-
![image](https://github.com/open-compass/VLMEvalKit/assets/55678087/14a5e92c-0d20-4c01-8ea7-9ca947fb40b3)
Excellent work :)
But I can't find the data_util file in the latest version.
And I would …
-
Hello,
I ran the pretraining for llava-internlm2-7b, but the answers look wrong. What could be the cause?
Here is the complete answer:
Two young people are walking on the beach and looking at a rock that has been found for them in which they can write their names with…
-
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
### Describe the bug
I am using a 3060 GPU; normally the system only occupies … of the VRAM
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
On a single A10 I successfully deployed qwen-chat-14b-4bi…
-
### Describe the question.
- internlm2-chat-20b is only 37G, and the other internlm2 models also look fairly small. Is the open-source release a quantized version?
- I am also puzzled as to why internlm2-chat-20b inference is so slow ([api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm/tree/e39f…
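A quick back-of-the-envelope check on the first question: at roughly 2 bytes per parameter (fp16/bf16), a ~20B-parameter checkpoint should weigh in around 40 GB, so the observed 37G is consistent with unquantized half-precision weights, not a 4-bit release (which would be closer to 10 GB). A small sketch of that arithmetic (the 20e9 parameter count is an approximation):

```python
def checkpoint_size_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough on-disk size of raw model weights, ignoring file metadata."""
    return n_params * bytes_per_param / 1e9

# fp16/bf16 weights: 2 bytes per parameter
print(checkpoint_size_gb(20e9, 2))    # ~40 GB, close to the observed 37G
# 4-bit quantized weights: 0.5 bytes per parameter
print(checkpoint_size_gb(20e9, 0.5))  # ~10 GB
```

The gap between 37G and 40 GB is plausibly explained by the actual parameter count being a bit under 20B.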
-
### Describe the bug
Using the demo script, with the model internlm2-chat-20b. internlm2-chat-7b works fine.
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import fire
def main(model_path: st…
```
-
![image](https://github.com/open-compass/VLMEvalKit/assets/18352727/3cb945d2-0e9f-4c2f-bbbe-b87554c6db4b)
In the leaderboard, [LLaVA-InternLM2-20B (QLoRA)] gets a higher average score than Monkey-Chat, …
-
I am fine-tuning llava-internlm2 (but replacing CLIP with DINOv2, see #297). I finished the pretraining phase successfully, but during the fine-tuning phase xtuner suddenly quits without any error re…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
calling internvl-1.5-4bit mod…