-
Is the MME metric reported for LLaVA incorrect? The score I measured myself for llava-internlm2-7b is 1407.
![image](https://github.com/InternLM/xtuner/assets/34935911/a23460d7-2732-4e5f-8bde-76175a153c68)
![image](https://github.com/InternLM/xtuner/assets/…
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-compass/opencompass/issues/) and [Discussions](https://github.com/open-compass/opencompass/discussions) but cannot get the expe…
-
### Motivation
Hello,
There is a client program which calls different types of OpenAI models, like "gpt-3.5-turbo" and "gpt-4-turbo". Now I want to use a local LLM instead of the OpenAI models and I *…
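(For reference, one common route is to serve the local model behind an OpenAI-compatible endpoint and only change the client's base URL. Below is a minimal sketch assuming the `openai` Python client and a locally started server; the URL, API key, and model name are placeholders, not values from this report.)

```python
# Minimal sketch: point an existing OpenAI-style client at a local
# OpenAI-compatible server (e.g. one started by LMDeploy or vLLM).
# base_url, api_key and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:23333/v1",  # assumed local endpoint
    api_key="EMPTY",                       # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="internlm2-chat-7b",             # assumed local model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```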
-
### Describe the question.
I am currently using xtuner's ZeRO-3 for full fine-tuning of internlm2-chat-20b, but with 8*A100 I can only fine-tune with a 2k context. Does internevo support 200k, or at least fairly long contexts of several tens of k? How should it be configured?
-
### Describe the bug
When loading a model with transformers, there seems to be an issue with English answers not having spaces.
### Environment
It can be reproduced on transformers 4.34.0, 4.37.1 a…
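A minimal reproduction sketch, assuming an internlm2 chat checkpoint loaded through transformers; the model name and prompt are illustrative rather than taken from the report:

```python
# Minimal repro sketch: generate an English answer and inspect the decoded
# text for missing spaces. The checkpoint and prompt are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "internlm/internlm2-chat-7b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("Please introduce yourself in English.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# If the tokenizer fails to restore spaces between sub-word pieces,
# the printed text will run words together.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```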
-
Thank you for your excellent work on MultimodalOCR!
When I run the following command:
`GPUS=2 BATCH_SIZE=8 sh shell/minimonkey/minimonkey_finetune_full.sh`
I meet the following issue:
```
+ GP…
```
-
The following error on demo running:
```
Traceback (most recent call last):
  File "7b.py", line 3, in <module>
    tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-math-7b", trust_remote_c…
```
-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
With the same model, the same hyperparameters, and the same data, why are the SFT results not identical, and why does the loss differ slightly between runs? The training parameters are as follows:
cuda=0,1,2,3
stage=sft
model_path=internlm2-chat-7b
…
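For context, a frequent source of such run-to-run drift is unseeded nondeterminism in the data pipeline and CUDA kernels. A minimal sketch of pinning these in PyTorch follows; the seed value is arbitrary and this does not cover every framework-specific setting (e.g. distributed data sharding):

```python
# Sketch: pin the usual sources of randomness for a PyTorch training run.
import os
import random

import numpy as np
import torch


def set_deterministic(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic CUDA kernels where they exist.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some matmul kernels
    torch.use_deterministic_algorithms(True, warn_only=True)


set_deterministic()
```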
-
ModuleNotFoundError: No module named 'ilagent'
Traceback:
File "Miniconda/envs/internlm/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
e…
-
### Describe the bug
[This commit](https://huggingface.co/internlm/internlm2-chat-7b/commit/baba19a1ae271df6fb4d1d091e95a0ff5b62fc18) added additional_special_tokens, which seems to result in mismatch…
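One way to check whether the new special tokens put the tokenizer and the model's embedding table out of sync is sketched below; the checkpoint name is illustrative, and resizing the embeddings may or may not be the right remedy for this particular release:

```python
# Sketch: compare tokenizer vocabulary size against the model's embedding
# table after the additional_special_tokens change.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "internlm/internlm2-chat-7b"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)

vocab_in_tokenizer = len(tokenizer)
vocab_in_model = model.get_input_embeddings().weight.shape[0]
print(vocab_in_tokenizer, vocab_in_model)

# If the tokenizer grew past the embedding table, one workaround is to
# resize the embeddings; whether that is appropriate depends on how the
# checkpoint was trained.
if vocab_in_tokenizer > vocab_in_model:
    model.resize_token_embeddings(vocab_in_tokenizer)
```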