QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

[HELP] I wonder how the MMLU result is evaluated? #1189

Closed YuMeng2v closed 4 months ago

YuMeng2v commented 6 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

The test results of Qwen-7B and Qwen-0.5B are lower than reported. I wonder whether the reported results were obtained after training on the MMLU training set?

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

hzhwcmhf commented 6 months ago

All of the evaluation code is here: https://github.com/QwenLM/Qwen/tree/main/eval

Please confirm whether the result you want to reproduce was generated by the base model or the chat model. If it is from the base model, please use the corresponding evaluation script (evaluate_mmlu.py) and model (e.g., Qwen1.5-7B, not Qwen1.5-7B-Chat).

github-actions[bot] commented 5 months ago

This issue has been automatically marked as inactive due to lack of recent activity. If you believe it remains unresolved and warrants attention, please leave a comment on this thread.

zjintheroom commented 3 months ago

> All of the evaluation code is here: https://github.com/QwenLM/Qwen/tree/main/eval
>
> Please confirm whether the result you want to reproduce was generated by the base model or the chat model. If it is from the base model, please use the corresponding evaluation script (evaluate_mmlu.py) and model (e.g., Qwen1.5-7B, not Qwen1.5-7B-Chat).

Hello, I'd like to ask: does this evaluation code also work for Qwen2 models?