TencentARC / LLaMA-Pro

[ACL 2024] Progressive LLaMA with Block Expansion.
https://tencentarc.github.io/LLaMA-Pro/
Apache License 2.0

Code for training llama pro? #2

Open yhyu13 opened 6 months ago

yhyu13 commented 6 months ago

Hi,

If I get it correctly, you have used code from https://github.com/allenai/open-instruct as base.

Would you release the full code of reproducing llama2 pro 8B?

Thanks!

hills-code commented 6 months ago

Yes, of course. I will organize the code soon. Thanks for your interest.

raghavgarg97 commented 6 months ago

Hey @hills-code, could you also add code for converting a model by adding identity blocks for training? I am excited to use similar techniques for other open-source models like Qwen! Thanks! :)

hills-code commented 6 months ago

> Hey @hills-code, could you also add code for converting a model by adding identity blocks for training? I am excited to use similar techniques for other open-source models like Qwen! Thanks! :)

I have added the block expansion script under the scripts folder. You can check it for reference. Hope it will be helpful!
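For anyone who wants the gist before reading the script: block expansion interleaves copies of existing decoder layers and zero-initializes the projections that write back into the residual stream, so each new block starts as an identity mapping. A minimal sketch with Hugging Face transformers; the attribute names follow the standard LLaMA implementation, and the expansion interval and model/save paths are illustrative assumptions, not the exact released script:

```python
from copy import deepcopy

import torch
from transformers import AutoModelForCausalLM


def expand_blocks(model, interval=4):
    """Insert a zero-initialized copy after every `interval` decoder layers.

    Zeroing o_proj and down_proj makes each new block an identity mapping
    at initialization, so the expanded model's outputs match the original.
    """
    old_layers = model.model.layers
    new_layers = torch.nn.ModuleList()
    for i, layer in enumerate(old_layers):
        new_layers.append(layer)
        if (i + 1) % interval == 0:
            block = deepcopy(layer)
            # Zero the projections that feed the residual stream (LLaMA has no biases here).
            block.self_attn.o_proj.weight.data.zero_()
            block.mlp.down_proj.weight.data.zero_()
            new_layers.append(block)
    model.model.layers = new_layers
    model.config.num_hidden_layers = len(new_layers)
    return model


model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = expand_blocks(model, interval=4)  # 32 -> 40 layers, the LLaMA Pro 8B configuration
model.save_pretrained("llama-pro-8b-init")
```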

raghavgarg97 commented 6 months ago

I will check that out, thanks!

raghavgarg97 commented 6 months ago

@hills-code I was able to get it to work! By the way, do you only train the added layers, and not even the lm head? Also, do you think directly going for SFT and skipping the continued pretraining will work?

yhyu13 commented 6 months ago

@raghavgarg97 No, I don't think you can skip the "post-training" of the extra blocks on a new corpus (e.g. bigcode, as mentioned in the paper) before applying instruction tuning, because the purpose of LLaMA Pro is to add new abilities without forgetting, by introducing more layers. The new ability is gained by training only the new layers on the new corpus.
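To make that last point concrete: during continued pretraining only the newly inserted blocks receive gradients, which is what preserves the original model's abilities. A rough sketch of freezing everything else; the indices of the new blocks are an assumption that depends on the expansion interval used in the sketch above:

```python
def freeze_except_new_blocks(model, new_block_indices):
    """Freeze all parameters except those in the newly inserted blocks."""
    for param in model.parameters():
        param.requires_grad = False  # original layers, embeddings, and lm_head stay frozen
    for idx in new_block_indices:
        for param in model.model.layers[idx].parameters():
            param.requires_grad = True
    return model


# With interval=4 above, the 8 new blocks land at indices 4, 9, 14, ..., 39.
new_indices = [5 * k + 4 for k in range(8)]
model = freeze_except_new_blocks(model, new_indices)
```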

Abolfazl-kr commented 6 months ago

Does anybody know how to continue pre-training LLaMA Pro? @yhyu13 @raghavgarg97 @yxgeee @hills-code

yhyu13 commented 6 months ago

Nah, the code has not been released yet.