princeton-nlp / LLM-Shearing

[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
https://arxiv.org/abs/2310.06694
MIT License

Please share the Alpaca generation and eval code and scripts to reproduce the shared results #26

Closed sanyalsunny111 closed 1 year ago

sanyalsunny111 commented 1 year ago

Hey @xiamengzhou, great work. I am trying to reproduce your results. Could you please share the Alpaca generation code and running scripts?

Thanks in advance.

xiamengzhou commented 1 year ago

Hi! We used ShareGPT to finetune the base models we released. Do you mean the generation protocol for ShareGPT?

sanyalsunny111 commented 1 year ago

> Hi! We used ShareGPT to finetune the base models we released. Do you mean the generation protocol for ShareGPT?

Yes, the generation protocol for ShareGPT/AlpacaEval.

xiamengzhou commented 1 year ago

Hi I just updated the code for instruction tuning in this folder: https://github.com/princeton-nlp/LLM-Shearing/tree/main/instruction_tuning

Sorry that it's not very well documented, but I think it should be pretty straightforward!
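For readers landing here before checking the linked folder: instruction tuning on ShareGPT-style data boils down to flattening a multi-turn conversation into a single prompt string. The sketch below is a hypothetical illustration only, not the repo's actual template — the real formatting lives in the `instruction_tuning` folder linked above, and the role tags and separators here are assumptions.

```python
# Hypothetical sketch: flatten a ShareGPT-style conversation into one prompt.
# The actual template used by LLM-Shearing is in its instruction_tuning folder;
# the "Human:"/"Assistant:" tags and separators below are illustrative guesses.

def build_prompt(conversation, system="You are a helpful assistant."):
    """conversation: list of {"from": "human"|"gpt", "value": str} turns,
    as in the ShareGPT data format."""
    parts = [system]
    for turn in conversation:
        role = "Human" if turn["from"] == "human" else "Assistant"
        parts.append(f"{role}: {turn['value']}")
    # End with a bare role tag so the model continues as the assistant.
    parts.append("Assistant:")
    return "\n\n".join(parts)

convo = [{"from": "human", "value": "What is structured pruning?"}]
print(build_prompt(convo))
```

The same function can format AlpacaEval-style single-instruction prompts by passing a one-turn conversation, which is typically how generations for that benchmark are produced before scoring.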

sanyalsunny111 commented 1 year ago

Very grateful for your help. Thank you again.