yhyu13 opened 10 months ago
Yes, of course. I will organize the code soon. Thanks for your interest.
Hey @hills-code, could you also add the code for converting a model by adding identity blocks for training? I am excited to use similar techniques for other open-source models like Qwen! Thanks! :)
I have added the block expansion script under the scripts folder. You can check it for reference. Hope it helps!
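For anyone who wants the gist before reading the script, here is a minimal sketch of block expansion, assuming a Hugging Face `LlamaForCausalLM`; the group count and which projections get zeroed follow the general idea from the paper, not necessarily the exact script under scripts/.

```python
# Sketch of LLaMA Pro style block expansion: interleave copies of existing
# decoder layers into the model, with their output projections zeroed so each
# new block starts as an identity mapping through the residual connections.
import copy
import torch
from transformers import LlamaForCausalLM

def expand_blocks(model: LlamaForCausalLM, num_groups: int = 8) -> LlamaForCausalLM:
    old_layers = model.model.layers
    layers_per_group = len(old_layers) // num_groups  # e.g. 32 layers -> 4 per group
    new_layers = torch.nn.ModuleList()
    for i, layer in enumerate(old_layers):
        new_layers.append(layer)
        # After each group, append an identity-initialized copy of the last layer.
        if (i + 1) % layers_per_group == 0:
            new_layer = copy.deepcopy(layer)
            # Zeroing o_proj and down_proj makes the attention and MLP outputs zero,
            # so the new block reduces to the identity via its residual connections.
            new_layer.self_attn.o_proj.weight.data.zero_()
            new_layer.mlp.down_proj.weight.data.zero_()
            new_layers.append(new_layer)
    model.model.layers = new_layers
    model.config.num_hidden_layers = len(new_layers)
    return model

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = expand_blocks(model, num_groups=8)  # 32 -> 40 layers, as in LLaMA Pro 8B
model.save_pretrained("llama-2-7b-expanded")  # hypothetical output path
```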
I will check that out, thanks!
@hills-code I was able to get it to work! Btw, do you only train the added layers, and not even the lm head? Also, do you think going directly to SFT and skipping the pretraining will work?
@raghavgarg97 No, I don't think you can skip the "post-training" of the extra blocks on some new corpus (e.g. bigcode, as mentioned in the paper) before applying tuning, because the purpose of LLaMA Pro is essentially to add new abilities without forgetting by introducing more layers. The new ability is gained by training the new layers on the new corpus.
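To make that concrete, here is a hedged sketch of the freeze-then-train setup (not the released training code): only the newly inserted blocks are updated during continued pretraining on the new corpus, while everything else, including the embeddings and lm_head, stays frozen. The layer indices below assume one new block after every 4 original layers (32 + 8 = 40), matching the expansion sketch above.

```python
# Freeze all original parameters and train only the inserted blocks.
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("llama-2-7b-expanded")  # hypothetical path from the expansion step

# Positions of the inserted blocks when one copy is added after every 4 original layers.
new_layer_ids = {4, 9, 14, 19, 24, 29, 34, 39}

for param in model.parameters():
    param.requires_grad = False
for idx, layer in enumerate(model.model.layers):
    if idx in new_layer_ids:
        for param in layer.parameters():
            param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=2e-4)
# ...then run a standard causal-LM pretraining loop on the new corpus (e.g. code data),
# followed by SFT on the full expanded model.
```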
Does anybody know how to continue pre-training LLaMA Pro? @yhyu13 @raghavgarg97 @yxgeee @hills-code
Nah, the code has not been released yet.
Hi,
If I understand correctly, you used code from https://github.com/allenai/open-instruct as a base.
Would you release the full code for reproducing LLaMA2 Pro 8B?
Thanks!