FSoft-AI4Code / CodeMMLU

🚀 CodeMMLU Evaluator: a framework for evaluating LLMs on the CodeMMLU MCQ benchmark.
https://fsoft-ai4code.github.io/codemmlu/
MIT License

CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities


📰 News • 🚀 Quick Start • 📋 Evaluation • 📌 Citation

📌 About


CodeMMLU is a comprehensive benchmark designed to evaluate the capabilities of large language models (LLMs) in coding and software knowledge. It builds upon the structure of multiple-choice question answering (MCQA) to cover a wide range of programming tasks and domains, including code generation, defect detection, software engineering principles, and much more.

Why CodeMMLU?

📰 News

[2024-10-13] We are releasing the CodeMMLU benchmark v0.0.1 and the preprint report HERE!

🚀 Quick Start

Install CodeMMLU and set up its dependencies via pip:

```bash
pip install codemmlu
```
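
To verify the install, the CLI exposes a version flag (see the `-V, --version` entry in the API usage section below):

```bash
# Sanity check: print the installed CodeMMLU version
codemmlu --version
```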

Generate responses for the CodeMMLU MCQ benchmark:

```bash
codemmlu --model_name <your_model_name_or_path> \
  --subset <subset> \
  --backend <backend> \
  --output_dir <your_output_dir>
```
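
For example, a minimal run against a public checkpoint might look like the following; the model name here is an illustrative placeholder, not one prescribed by CodeMMLU:

```bash
# Hypothetical model; any local path or Hugging Face Hub ID should work.
codemmlu --model_name deepseek-ai/deepseek-coder-1.3b-instruct \
  --subset all \
  --backend hf \
  --output_dir ./codemmlu_results
```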

📋 Evaluation

Build codemmlu from source:

```bash
git clone https://github.com/Fsoft-AI4Code/CodeMMLU.git
cd CodeMMLU
pip install -e .
```

> [!NOTE]
> If you prefer the vllm backend, we highly recommend installing vllm from the official project before installing codemmlu.
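
One install order that satisfies this recommendation is sketched below; the plain PyPI route is only one of the install methods the vLLM project documents, so check their docs for the build matching your CUDA/PyTorch setup:

```bash
# Assumption: a prebuilt PyPI vLLM wheel matches your environment.
pip install vllm
pip install -e .   # then install CodeMMLU from the checkout
```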

Generate responses to the CodeMMLU questions:

```bash
codemmlu --model_name <your_model_name_or_path> \
  --peft_model <your_peft_model_name_or_path> \
  --subset all \
  --batch_size 16 \
  --backend [vllm|hf] \
  --max_new_tokens 1024 \
  --temperature 0.0 \
  --output_dir <your_output_dir> \
  --instruction_prefix <special_prefix> \
  --assistant_prefix <special_prefix> \
  --cache_dir <your_cache_dir>
```
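
As a concrete sketch of the full command, with every value below a hypothetical placeholder (only the flags come from the CodeMMLU CLI):

```bash
# Hypothetical model, adapter, prefixes, and paths; adjust to your setup.
codemmlu --model_name meta-llama/Meta-Llama-3-8B-Instruct \
  --peft_model ./adapters/my-lora \
  --subset all \
  --batch_size 16 \
  --backend vllm \
  --max_new_tokens 1024 \
  --temperature 0.0 \
  --output_dir ./outputs \
  --instruction_prefix "### Instruction:" \
  --assistant_prefix "### Response:" \
  --cache_dir ./hf_cache
```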

<details>
<summary>⏬ API Usage :: click to expand ::</summary>

```bash
usage: codemmlu [-h] [-V] [--subset SUBSET] [--batch_size BATCH_SIZE]
                [--instruction_prefix INSTRUCTION_PREFIX]
                [--assistant_prefix ASSISTANT_PREFIX] [--output_dir OUTPUT_DIR]
                [--model_name MODEL_NAME] [--peft_model PEFT_MODEL]
                [--backend BACKEND] [--max_new_tokens MAX_NEW_TOKENS]
                [--temperature TEMPERATURE] [--cache_dir CACHE_DIR]

====================
CodeMMLU
====================

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         Get version
  --subset SUBSET       Select evaluate subset
  --batch_size BATCH_SIZE
  --instruction_prefix INSTRUCTION_PREFIX
  --assistant_prefix ASSISTANT_PREFIX
  --output_dir OUTPUT_DIR
                        Save generation and result path
  --model_name MODEL_NAME
                        Local path or Huggingface Hub link to load model
  --peft_model PEFT_MODEL
                        Lora config
  --backend BACKEND     LLM generation backend (default: hf)
  --max_new_tokens MAX_NEW_TOKENS
                        Number of max new tokens
  --temperature TEMPERATURE
  --cache_dir CACHE_DIR
                        Cache for save model download checkpoint and dataset
```

</details>

List of supported backends:

| Backend | DecoderModel | LoRA |
|---------|--------------|------|
| Transformers (`hf`) | ✅ | ✅ |
| Vllm (`vllm`) | ✅ | ✅ |
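
Since either backend takes a decoder model, switching engines is just a matter of the `--backend` flag; the model path below is a placeholder:

```bash
# Same evaluation on both backends; ./models/my-decoder is hypothetical.
codemmlu --model_name ./models/my-decoder --subset all --backend hf   --output_dir ./out_hf
codemmlu --model_name ./models/my-decoder --subset all --backend vllm --output_dir ./out_vllm
```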

Leaderboard

To evaluate your model and submit your results to the leaderboard, please follow the instructions in data/README.md.

📌 Citation

If you find this repository useful, please consider citing our paper:

```bibtex
@article{nguyen2024codemmlu,
  title={CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities},
  author={Nguyen, Dung Manh and Phan, Thang Chau and Le, Nam Hai and Doan, Thong T. and Nguyen, Nam V. and Pham, Quang and Bui, Nghi D. Q.},
  journal={arXiv preprint},
  year={2024}
}
```