microsoft/mttl
Building modular LMs with parameter-efficient fine-tuning.
MIT License
Instruct #47 (Closed)

shuishen112 closed this 10 months ago

shuishen112 commented 1 year ago:
- merge the code from instruction tuning
- Alpaca training with LLaMA
- zero-shot evaluation (superni, mmlu, gsm, bbh, codex, tydiqa, alpaca_eval)
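As a rough sketch of the Alpaca-style training item above: Alpaca instruction tuning renders each record (`instruction`, `input`, `output` fields, per the public Stanford Alpaca dataset schema) into a fixed prompt template before fine-tuning. The function below is an illustrative assumption, not code from this repo:

```python
# Hypothetical sketch of Alpaca-style prompt formatting for instruction tuning.
# The templates follow the public Stanford Alpaca format; this is not mttl code.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(example: dict) -> str:
    """Render one Alpaca-format record into a prompt plus training target."""
    if example.get("input"):
        prompt = PROMPT_WITH_INPUT.format(**example)
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=example["instruction"])
    return prompt + example["output"]
```

The formatted strings would then be tokenized and fed to the LLaMA fine-tuning loop; records with an empty `input` field use the shorter template.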