microsoft / mttl

Building modular LMs with parameter-efficient fine-tuning.
MIT License

Instruct #47

Closed: shuishen112 closed this 10 months ago

shuishen112 commented 1 year ago

Merge the code for instruction tuning:

  1. Alpaca training with Llama (a minimal sketch follows the list).
  2. Zero-shot evaluation (superni, mmlu, gsm, bbh, codex, tydiqa, alpaca_eval); see the scoring sketch below.
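For item 1, here is a minimal sketch of Alpaca-style instruction tuning on a Llama checkpoint using Hugging Face `transformers`. This is not mttl's actual training code; the checkpoint name, prompt template, and hyperparameters are assumptions for illustration.

```python
# Hedged sketch of Alpaca-style instruction tuning; not mttl's training code.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # assumed checkpoint; swap in your own

# Standard Alpaca prompt template (ignores the optional `input` field for brevity).
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

def format_and_tokenize(example):
    # Render the prompt template and tokenize for causal-LM training.
    text = PROMPT.format(**example) + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("tatsu-lab/alpaca", split="train")
dataset = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-llama",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False yields plain next-token (causal LM) labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```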
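For item 2, a common way to run zero-shot multiple-choice benchmarks such as mmlu is to score each answer option by its log-likelihood under the model and pick the argmax. The sketch below illustrates that idea; the model name, question, and `option_logprob` helper are hypothetical, not the evaluation harness this PR merges.

```python
# Hedged sketch of zero-shot multiple-choice scoring; not the PR's eval harness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "huggyllama/llama-7b"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def option_logprob(prompt: str, option: str) -> float:
    """Sum the log-probabilities of the option tokens given the prompt.

    Assumes the tokenizer splits cleanly at the prompt/option boundary;
    real harnesses handle this alignment more carefully.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    # logits at position t predict the token at t+1, so shift by one.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    start = prompt_ids.shape[1] - 1  # index of the first predicted option token
    return logprobs[start:].gather(-1, targets[start:, None]).sum().item()

question = "What is the capital of France?\nAnswer:"
options = [" Paris", " London", " Berlin", " Madrid"]
scores = [option_logprob(question, o) for o in options]
print(options[max(range(len(options)), key=scores.__getitem__)])
```

Generative benchmarks in the list (e.g. gsm, codex) instead sample a completion and check it against the reference answer or unit tests, which this sketch does not cover.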