Hi, thanks for the useful code! I have a question about the accuracy on the commonsense reasoning tasks. In the README, the reported accuracy of LLaMA (for example) is:

[accuracy table from the README]

while the Llama2 paper reports:

[accuracy table from the Llama2 paper]

Some tasks show lower accuracy after fine-tuning, e.g., 76.5 -> 68.9 on BoolQ. Could you kindly explain this? Thanks a lot!
Given the MMLU performance referenced in the Llama2 paper, I believe the results in its Table 20 were obtained in a 5-shot setting, while LLM-Adapters' reported results are primarily zero-shot.
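To make the difference concrete, here is a minimal sketch of how a zero-shot and a 5-shot prompt differ. This is not LLM-Adapters' actual evaluation code; the `build_prompt` helper and the BoolQ-style template are my own illustration. The point is that the model sees different inputs under the two settings, so the accuracies aren't directly comparable.

```python
# Minimal sketch (NOT the repository's actual code): illustrates how a
# zero-shot prompt differs from a 5-shot one. The helper name
# `build_prompt` and the prompt template are hypothetical.
from typing import Sequence, Tuple

def build_prompt(passage: str, question: str,
                 demos: Sequence[Tuple[str, str, str]] = ()) -> str:
    """Format a BoolQ-style prompt. `demos` is empty for zero-shot, or
    holds (passage, question, answer) triples for k-shot evaluation."""
    parts = [
        f"Passage: {p}\nQuestion: {q}\nAnswer: {a}\n"
        for p, q, a in demos
    ]
    parts.append(f"Passage: {passage}\nQuestion: {question}\nAnswer:")
    return "\n".join(parts)

# Zero-shot, as in LLM-Adapters' reported numbers:
zero_shot = build_prompt("The sky appears blue because ...", "is the sky blue?")

# 5-shot, as (I believe) in the Llama2 paper's Table 20: five labeled
# demonstrations precede the test item, which typically boosts accuracy.
five_shot = build_prompt(
    "The sky appears blue because ...", "is the sky blue?",
    demos=[("<train passage>", "<train question>", "yes")] * 5,  # illustrative
)
```

Since the in-context demonstrations shift the model toward the task format and label space, 5-shot scores are usually higher than zero-shot ones on the same benchmark, which can account for a gap like 76.5 vs. 68.9 on BoolQ.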