AGI-Edgerunners / LLM-Adapters

Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
https://arxiv.org/abs/2304.01933
Apache License 2.0

Upload evaluation outputs and adapters #46

Open mkeoliya opened 1 year ago

mkeoliya commented 1 year ago

It would be great if you could provide the individual model outputs from running on the test sets. Additionally, could you provide links to all the model adapters used? Currently the README only includes LLaMA-13B.

Perhaps a GDrive or Zenodo link would work well.

This would enable quicker turnaround when comparing different adapters. Thanks a lot for the work so far!

HZQ950419 commented 1 year ago

Hi,

Thanks for your interest in our project! We have uploaded the outputs of LLaMA-7B and LLaMA-13B with different adapters on both the math reasoning and commonsense reasoning tasks. You can find them here: https://drive.google.com/drive/folders/1weL4Cq1h6M5lOhNL9Hran167D1dqtOZk?usp=sharing. The results are consistent with those reported in the paper, but we still need some time to collate the adapter weights.
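
In the meantime, here is a minimal sketch of how adapter checkpoints in PEFT format can be attached to a base model once the weights are released. The base checkpoint and the adapter path below are assumptions for illustration, not the released artifacts:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base checkpoint and a hypothetical adapter path -- substitute
# the actual released adapter directory once it is available.
base = "yahma/llama-7b-hf"
adapter_path = "./trained_models/llama-7b-lora-math"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, adapter_path)  # attach adapter weights
model.eval()

# Generate an answer for a sample math-reasoning prompt.
prompt = "Natalia sold clips to 48 of her friends in April ..."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```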

Please let us know if you have further questions.

mkeoliya commented 1 year ago

Thanks a lot!

mkeoliya commented 12 months ago

@HZQ950419 can you upload the adapter weights too?