microsoft / mttl

Building modular LMs with parameter-efficient fine-tuning.
MIT License

May I get the trained LoRA lib to reproduce the paper: Modular LLM? #72

Closed Weifan1226 closed 1 month ago

Weifan1226 commented 1 month ago

Hi Team!

I came across your excellent work on Modular LLM and would like to build upon it. Could you please provide the trained LoRA library used in your research to help me reproduce the results?

Thank you!

shuishen112 commented 1 month ago

Thanks for your message. You can reproduce the results in the paper using the private library:

  1. Uniform routing:

python eval_library.py -k library_id=hf://zhan1993/private_library_phi2_icml merge_or_route=uniform predict_batch_size=4

  2. Arrow routing:

python eval_library.py -k library_id=hf://zhan1993/private_library_phi2_icml merge_or_route=arrow predict_batch_size=4

We also release the private lib based on Phi-3:

python eval_library.py -k library_id=hf://zhan1993/private_library_phi3-4k merge_or_route=uniform predict_batch_size=4
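For intuition on what the two `merge_or_route` modes above do, here is a minimal, hedged sketch (not the mttl implementation; function names and shapes are my own assumptions). `uniform` averages the experts' LoRA updates into a single merged update, while `arrow` builds a routing prototype per expert from the top right singular vector of its LoRA delta and scores hidden states against those prototypes:

```python
import torch

def uniform_merge(lora_As, lora_Bs):
    # merge_or_route=uniform (sketch): average all experts' LoRA
    # updates delta_W = B @ A into a single merged update.
    return torch.stack([B @ A for A, B in zip(lora_As, lora_Bs)]).mean(dim=0)

def arrow_prototypes(lora_As, lora_Bs):
    # merge_or_route=arrow (sketch): for each expert, take the top
    # right singular vector of its LoRA update as a routing prototype.
    protos = []
    for A, B in zip(lora_As, lora_Bs):
        delta_W = B @ A                                   # (d_out, d_in)
        _, _, Vh = torch.linalg.svd(delta_W, full_matrices=False)
        protos.append(Vh[0])                              # (d_in,)
    return torch.stack(protos)                            # (n_experts, d_in)

def arrow_route(hidden, protos, top_k=2):
    # Score each expert by |<hidden, prototype>|; the sign of a
    # singular vector is arbitrary, hence the absolute value.
    scores = (hidden @ protos.T).abs()                    # (n_experts,)
    top_vals, top_idx = scores.topk(top_k)
    weights = torch.softmax(top_vals, dim=-1)             # mix top-k experts
    return top_idx, weights
```

This is only meant to convey the routing idea; the actual library handles per-layer experts, batching, and expert merging inside `eval_library.py`.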

Weifan1226 commented 1 month ago

Hi Team,

Thank you so much for your help! I truly appreciate your support!

Best regards, Wei_fan

sordonia commented 1 month ago

@shuishen112 thanks! closing the issue :)