AGI-Edgerunners / LLM-Adapters

Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
https://arxiv.org/abs/2304.01933
Apache License 2.0

Questions about the accuracy of eight commonsense reasoning datasets vs the Llama paper #70

Open Yonghao-Tan opened 2 months ago

Yonghao-Tan commented 2 months ago

Hi, thanks for the useful code! I have a question about the accuracy on the commonsense reasoning tasks. The accuracies reported for LLaMA in the README [screenshot of the README results table] are lower on some tasks than those in the Llama 2 paper [screenshot of the paper's table]; for example, BoolQ drops from 76.5 to 68.9 after fine-tuning. Could you kindly explain this to me? Thanks a lot!

zjtco-yr commented 1 month ago

same question

YananLi18 commented 4 weeks ago

Given the MMLU performance referenced in the Llama 2 paper, I believe the results in its Table 20 were obtained in a 5-shot setting, while the LLM-Adapters results are primarily zero-shot, so the two tables are not directly comparable.
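
For anyone hitting the same discrepancy, the zero-shot vs. few-shot difference can be illustrated with a small prompt-construction sketch. The templates and example questions below are illustrative only (they are not the exact templates used by LLM-Adapters or the Llama 2 evaluation harness); the point is simply that a k-shot prompt prepends k solved demonstrations, which typically raises measured accuracy relative to a zero-shot prompt on the same test questions.

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction for a BoolQ-style task.
# Templates and demonstrations are hypothetical, not taken from either codebase.

ANSWER_TEMPLATE = (
    "Please answer the following question with true or false.\n"
    "Question: {question}\n"
    "Answer:"
)


def build_zero_shot_prompt(question: str) -> str:
    """Zero-shot: the model sees only the instruction and the test question."""
    return ANSWER_TEMPLATE.format(question=question)


def build_few_shot_prompt(question: str, demonstrations: list[tuple[str, str]]) -> str:
    """Few-shot (e.g. 5-shot): solved examples are prepended before the test question."""
    shots = "\n\n".join(
        ANSWER_TEMPLATE.format(question=q) + f" {a}" for q, a in demonstrations
    )
    return shots + "\n\n" + ANSWER_TEMPLATE.format(question=question)


if __name__ == "__main__":
    # Hypothetical demonstrations; a real 5-shot evaluation would sample them
    # from the training split.
    demos = [
        ("is the sky blue on a clear day", "true"),
        ("do penguins live at the north pole", "false"),
    ]
    print(build_zero_shot_prompt("is boolq a yes/no question dataset"))
    print("---")
    print(build_few_shot_prompt("is boolq a yes/no question dataset", demos))
```

So even with identical checkpoints, a table produced under a 5-shot protocol and one produced zero-shot can differ by several points on the same dataset.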